Channel: Intel Communities : Discussion List - Wired Ethernet

Ethernet won't work. Current device is Intel(R) Dual Band Wireless-AC 7265


Dear Support Community,

 

I've been getting these events:

Screenshot (46).png

This is what it says in the picture above:

 

Device PCI\VEN_8086&DEV_095A&SUBSYS_50108086&REV_59\4&33c74b69&0&00E2 was not migrated due to partial or ambiguous match.

Last Device Instance Id: BTH\MS_BTHPAN\6&99b7c08&0&2
Class Guid: {4d36e972-e325-11ce-bfc1-08002be10318}
Location Path:
Migration Rank: 0xF000FFFFFFFFF102
Present: false
Status: 0xC0000719

Event ID: 442

Screenshot (47).png

This is what it says in the picture above:

Device PCI\VEN_8086&DEV_095A&SUBSYS_50108086&REV_59\4&33c74b69&0&00E2 had a problem starting.

 

Driver Name: oem122.inf
Class Guid: {4d36e972-e325-11ce-bfc1-08002be10318}
Service: Netwtw04
Lower Filters:
Upper Filters: vwifibus
Problem: 0x15
Problem Status: 0x0

Event ID: 411

 

as well as these events:

 

Screenshot (48).png

Above says: Device PCI\VEN_8086&DEV_095A&SUBSYS_50108086&REV_59\4&33C74B69&0&00E2 was deleted.

Screenshot (49).png

and here it says: Device PCI\VEN_8086&DEV_095A&SUBSYS_50108086&REV_59\4&33c74b69&0&00E2 was not migrated due to partial or ambiguous match.

 

I assume that this is what's causing all the problems that I'm having with the Ethernet cable.

 

Here is the list of things that I've tried:

- Tried connecting to a different Ethernet port

- Tried updating, downgrading, and even uninstalling and reinstalling the driver for the Intel Dual Band Wireless-AC 7265

- Tried uninstalling and reinstalling the Realtek Ethernet driver; it didn't do anything

- Tried restarting (multiple times already)

but the problem still persists. I haven't tried resetting the CMOS, because I'm afraid I might mess up my laptop. Does anyone know how to fix this?

 

regards,

 

Calvin Chandra

 


Audio stutter and system freezing with Intel I350-T4V2


Hello everyone,

I am having an issue with my music, videos, games, and general system usage coming to a brief halt every so often, caused by high DPC latency reported for the driver tcpip.sys, which I believe is related to the Intel I350-T4V2 NIC in my system. This is the report from LatencyMon, generated after nine minutes of it running:

_________________________________________________________________________________________________________

CONCLUSION

_________________________________________________________________________________________________________

Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. At least one detected problem appears to be network related. In case you are using a WLAN adapter, try disabling it to get better results. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.

LatencyMon has been analyzing your system for  0:09:15  (h:mm:ss) on all processors.

 

 

 

 

_________________________________________________________________________________________________________

SYSTEM INFORMATION

_________________________________________________________________________________________________________

Computer name:                                        STEVEN-DT

OS version:                                           Windows 10 , 10.0, build: 15063 (x64)

Hardware:                                             ASRock, Z370 Gaming K6

CPU:                                                  GenuineIntel Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz

Logical processors:                                   12

Processor groups:                                     1

RAM:                                                  32701 MB total

 

 

 

 

_________________________________________________________________________________________________________

CPU SPEED

_________________________________________________________________________________________________________

Reported CPU speed:                                   3696 MHz

Measured CPU speed:                                   1 MHz (approx.)

 

 

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

 

 

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.

 

 

 

 

 

 

_________________________________________________________________________________________________________

MEASURED INTERRUPT TO USER PROCESS LATENCIES

_________________________________________________________________________________________________________

The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

 

 

Highest measured interrupt to process latency (µs):   64369.038961

Average measured interrupt to process latency (µs):   5.038917

 

 

Highest measured interrupt to DPC latency (µs):       64365.437229

Average measured interrupt to DPC latency (µs):       1.360576

 

 

 

 

_________________________________________________________________________________________________________

REPORTED ISRs

_________________________________________________________________________________________________________

Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

 

 

Highest ISR routine execution time (µs):              92.833333

Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Highest reported total ISR routine time (%):          0.100551

Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Total time spent in ISRs (%)                          0.122042

 

 

ISR count (execution time <250 µs):                   864900

ISR count (execution time 250-500 µs):                0

ISR count (execution time 500-999 µs):                0

ISR count (execution time 1000-1999 µs):              0

ISR count (execution time 2000-3999 µs):              0

ISR count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED DPCs

_________________________________________________________________________________________________________

DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

 

 

Highest DPC routine execution time (µs):              75280.324134

Driver with highest DPC routine execution time:       tcpip.sys - TCP/IP Driver, Microsoft Corporation

 

 

Highest reported total DPC routine time (%):          0.043887

Driver with highest DPC total execution time:         nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver, Version 388.31 , NVIDIA Corporation

 

 

Total time spent in DPCs (%)                          0.135294

 

 

DPC count (execution time <250 µs):                   3095769

DPC count (execution time 250-500 µs):                0

DPC count (execution time 500-999 µs):                1

DPC count (execution time 1000-1999 µs):              0

DPC count (execution time 2000-3999 µs):              0

DPC count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED HARD PAGEFAULTS

_________________________________________________________________________________________________________

Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

 

 

 

 

Process with highest pagefault count:                 none

 

 

Total number of hard pagefaults                       0

Hard pagefault count of hardest hit process:          0

Highest hard pagefault resolution time (µs):          0.0

Total time spent in hard pagefaults (%):              0.0

Number of processes hit:                              0

 

 

 

 

_________________________________________________________________________________________________________

PER CPU DATA

_________________________________________________________________________________________________________

CPU 0 Interrupt cycle time (s):                       21.807476

CPU 0 ISR highest execution time (µs):                92.833333

CPU 0 ISR total execution time (s):                   8.127851

CPU 0 ISR count:                                      864377

CPU 0 DPC highest execution time (µs):                418.928030

CPU 0 DPC total execution time (s):                   7.968359

CPU 0 DPC count:                                      2881669

_________________________________________________________________________________________________________

CPU 1 Interrupt cycle time (s):                       5.576597

CPU 1 ISR highest execution time (µs):                18.037338

CPU 1 ISR total execution time (s):                   0.000633

CPU 1 ISR count:                                      523

CPU 1 DPC highest execution time (µs):                156.160173

CPU 1 DPC total execution time (s):                   0.031753

CPU 1 DPC count:                                      2473

_________________________________________________________________________________________________________

CPU 2 Interrupt cycle time (s):                       3.872798

CPU 2 ISR highest execution time (µs):                0.0

CPU 2 ISR total execution time (s):                   0.0

CPU 2 ISR count:                                      0

CPU 2 DPC highest execution time (µs):                111.264610

CPU 2 DPC total execution time (s):                   0.109259

CPU 2 DPC count:                                      30866

_________________________________________________________________________________________________________

CPU 3 Interrupt cycle time (s):                       3.914723

CPU 3 ISR highest execution time (µs):                0.0

CPU 3 ISR total execution time (s):                   0.0

CPU 3 ISR count:                                      0

CPU 3 DPC highest execution time (µs):                75280.324134

CPU 3 DPC total execution time (s):                   0.213586

CPU 3 DPC count:                                      3120

_________________________________________________________________________________________________________

CPU 4 Interrupt cycle time (s):                       4.130378

CPU 4 ISR highest execution time (µs):                0.0

CPU 4 ISR total execution time (s):                   0.0

CPU 4 ISR count:                                      0

CPU 4 DPC highest execution time (µs):                127.520022

CPU 4 DPC total execution time (s):                   0.120647

CPU 4 DPC count:                                      40343

_________________________________________________________________________________________________________

CPU 5 Interrupt cycle time (s):                       3.761527

CPU 5 ISR highest execution time (µs):                0.0

CPU 5 ISR total execution time (s):                   0.0

CPU 5 ISR count:                                      0

CPU 5 DPC highest execution time (µs):                83.285173

CPU 5 DPC total execution time (s):                   0.004639

CPU 5 DPC count:                                      1086

_________________________________________________________________________________________________________

CPU 6 Interrupt cycle time (s):                       4.832866

CPU 6 ISR highest execution time (µs):                0.0

CPU 6 ISR total execution time (s):                   0.0

CPU 6 ISR count:                                      0

CPU 6 DPC highest execution time (µs):                101.324675

CPU 6 DPC total execution time (s):                   0.199118

CPU 6 DPC count:                                      46428

_________________________________________________________________________________________________________

CPU 7 Interrupt cycle time (s):                       3.556605

CPU 7 ISR highest execution time (µs):                0.0

CPU 7 ISR total execution time (s):                   0.0

CPU 7 ISR count:                                      0

CPU 7 DPC highest execution time (µs):                82.946970

CPU 7 DPC total execution time (s):                   0.003708

CPU 7 DPC count:                                      596

_________________________________________________________________________________________________________

CPU 8 Interrupt cycle time (s):                       3.937102

CPU 8 ISR highest execution time (µs):                0.0

CPU 8 ISR total execution time (s):                   0.0

CPU 8 ISR count:                                      0

CPU 8 DPC highest execution time (µs):                136.426407

CPU 8 DPC total execution time (s):                   0.144712

CPU 8 DPC count:                                      39441

_________________________________________________________________________________________________________

CPU 9 Interrupt cycle time (s):                       3.597930

CPU 9 ISR highest execution time (µs):                0.0

CPU 9 ISR total execution time (s):                   0.0

CPU 9 ISR count:                                      0

CPU 9 DPC highest execution time (µs):                82.416126

CPU 9 DPC total execution time (s):                   0.014064

CPU 9 DPC count:                                      6686

_________________________________________________________________________________________________________

CPU 10 Interrupt cycle time (s):                       4.082868

CPU 10 ISR highest execution time (µs):                0.0

CPU 10 ISR total execution time (s):                   0.0

CPU 10 ISR count:                                      0

CPU 10 DPC highest execution time (µs):                122.268939

CPU 10 DPC total execution time (s):                   0.162320

CPU 10 DPC count:                                      36323

_________________________________________________________________________________________________________

CPU 11 Interrupt cycle time (s):                       3.982992

CPU 11 ISR highest execution time (µs):                0.0

CPU 11 ISR total execution time (s):                   0.0

CPU 11 ISR count:                                      0

CPU 11 DPC highest execution time (µs):                81.980519

CPU 11 DPC total execution time (s):                   0.038943

CPU 11 DPC count:                                      6742

_________________________________________________________________________________________________________

I have tried updating my I350's driver to the latest version (12.15.184.0), but the problem persists. I have the latest UEFI update from ASRock (v1.30) and Windows is up to date with all the latest updates. I am at a loss as to what to do to solve this issue.

 

Thanks in advance.

Ixgbe driver support for X552 controller and external phy


Hi

 

I am using an X552 controller (ADLINK com-ex7) with an external PHY (AQR107).

 

I just wanted to know whether this configuration is supported by the ixgbe driver or not. If not, is there any patch available to support it?

 

Currently I am getting the following error:

 

[    0.748518] pci 0000:03:00.1: [8086:15ad] type 00 class 0x020000

[    0.748532] pci 0000:03:00.1: reg 0x10: [mem 0xfb200000-0xfb3fffff 64bit pref]

[    0.748553] pci 0000:03:00.1: reg 0x20: [mem 0xfb600000-0xfb603fff 64bit pref]

[    0.748560] pci 0000:03:00.1: reg 0x30: [mem 0xfb800000-0xfb87ffff pref]

[    0.748606] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold

[    0.748623] pci 0000:03:00.1: reg 0x184: [mem 0xfba00000-0xfba03fff 64bit]

[    0.748625] pci 0000:03:00.1: VF(n) BAR0 space: [mem 0xfba00000-0xfbafffff 64bit] (contains BAR0 for 64 VFs)

[    0.749006] pci 0000:03:00.1: reg 0x190: [mem 0xfb900000-0xfb903fff 64bit]

[    0.749008] pci 0000:03:00.1: VF(n) BAR3 space: [mem 0xfb900000-0xfb9fffff 64bit] (contains BAR3 for 64 VFs)

[   14.380732] ixgbe 0000:03:00.1: HW Init failed: -17

[   14.382261] ixgbe: probe of 0000:03:00.1 failed with error -17
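
In case it helps, here is a small diagnostic sketch (assuming a Linux shell on the board; the interpretation of -17 below is hedged, taken from a reading of the driver sources) to confirm whether the loaded ixgbe module even claims this device and where init fails:

# Does the loaded ixgbe module claim the X552 device ID 8086:15ad at all?
modinfo ixgbe | grep -i 15ad

# Which ixgbe build is the kernel actually using?
modinfo ixgbe | grep -E '^filename|^version'

# Re-probe the port and capture the driver's own messages around the failure
sudo modprobe -r ixgbe && sudo modprobe ixgbe
dmesg | grep -i ixgbe | tail -n 30

# Note: -17 here is a driver-specific code (in the ixgbe sources it appears to
# be IXGBE_ERR_PHY_ADDR_INVALID), which would point at the external PHY not
# being reachable over MDIO rather than at a PCI-level problem.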

 

 

Thanks

x520-sr2


I have a pair of Supermicro servers with Intel X520-SR2 adapters. With NIC teaming, I'm getting thousands of log events per second stating that packets were received on the wrong NIC. Since it's a team I can see this as a possibility, but does it need to flood me with messages about it?

 

No matching TeamNic found for packets received from member NDISIMPLATFORM\Parameters\Adapters\

Received LACPDU on Member {27EF013E-0DD4-497E-90A2-7E5AC30E6E84}. Buffer= 0x0180C2000002C4F57C565A6F880901010114008001E0520000021904008031043D0000000214000090E2BA92F1D80000000001001F0000000310050000000000000000000000000000000...

 

Event IDs: 25, 26, 27

Drivers are the most current.

 

 

 

 

boatmn810

Connect directly X710 to XL710 with QSFP Breakout Cable


Hello!

 

I have a question regarding direct connectivity between NICs without any switch.

I have a computer equipped with an Intel Ethernet CNA X710-DA4FH (4x 10 Gbps) and another computer with an Intel Ethernet CNA XL710-QDA1 (1x 40 Gbps). They are connected directly using an Intel X4DACBL3 Ethernet QSFP 40G QSFP+ to 4x10G breakout cable. Is this setup possible in theory? Unfortunately, I could not make it work.

 

Thanks in advance!

 

Best regards

IPsec offload


Hello

I'm looking for an Intel NIC that offloads IPsec to the NIC hardware.

I saw some datasheets, e.g. for the 540, that claim the hardware supports IPsec offload, but there is no support in the Linux driver.
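
(For reference, one hedged way to check whether a given NIC/driver combination exposes ESP offload to the Linux stack is the ethtool feature flags; eth0 below is only a placeholder name:)

# If the driver implements XFRM hardware offload, these features show up
ethtool -k eth0 | grep -E 'esp-hw-offload|esp-tx-csum-hw-offload'

# With offload present, an xfrm state can be pinned to the device (iproute2):
# ip xfrm state add ... offload dev eth0 dir out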

Thanks, Avi

sun quad port fastethernet adapter with intel fw21154be chipset


Hi

Sorry if I've placed this in the wrong section.

I have a Sun quad-port FastEthernet adapter with the Intel FW21154BE chipset and want to use it on a PC with Windows 7, but I can't find any Windows driver for it.

Can anyone send a driver or a link to one?

Network settings HP Z440.


Hey.  

 

Two HP Z440 workstations have the OEM version of Windows 10 installed.

 

The Intel network interface has more than 30 parameters in its settings.

 

1) How can I transfer settings from one workstation to another? 

2) Are there any tools for migrating network settings from one workstation to another?



x1 vs x8 10gb performance...


I'm testing the throughput of a D-1500-based X552 and an X520, and according to lspci their PCIe lane configurations are as follows:

 

admin@capture:~$ sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" |grep -A 1 Ethernet

04:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

04:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

However, I'm getting the exact same numbers for each. It doesn't make sense that the X552 would only get 1 lane.

 

Is what lspci is telling me wrong?
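
For reference, here is the quick comparison I can run (bus addresses taken from the lspci output above), checking what each port negotiated (LnkSta) against what it is capable of (LnkCap), with the rough per-link bandwidth math as a sanity check:

# Compare negotiated link (LnkSta) against capability (LnkCap) for each port
for dev in 04:00.0 04:00.1 06:00.0 06:00.1; do
    echo "== $dev =="
    sudo lspci -s "$dev" -vv | grep -E 'LnkCap:|LnkSta:'
done

# Rough usable bandwidth with 8b/10b encoding (PCIe gen1/gen2):
#   x1 @ 2.5 GT/s ->  ~2 Gbit/s  (would clearly bottleneck a 10 GbE port)
#   x8 @ 5.0 GT/s -> ~32 Gbit/s
# Identical 10 Gbit/s results on both cards therefore suggest the x1 / 2.5 GT/s
# figure reported for the integrated X552 may not reflect its real internal
# bandwidth.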

 

Thanks.


Intel Network Adapter Driver for Wins 7 - Very Slow Response On Access the tab On Windows Device Manager


I downloaded and installed the latest V22.10 of the above-mentioned package (PROWinx64Legacy.exe) on a Beckhoff IPC running Windows 7 Professional x64 Edition with SP1, for the purpose of NIC teaming the installed Intel adapters. After the successful installation, I encountered the following:

 

(1) I found that it takes a long time (around 8 to 10 seconds) for the properties window of the various Intel adapters to show up once it is accessed. The added tabs (like Teaming and Link Speed) behave the same way and take a long time (5 to 8 seconds) to activate and display their contents. This only affects the Intel network adapters. I installed the driver on two IPCs and the observation is the same. I tried to uninstall and reinstall the driver on the same two IPCs with no improvement.

 

Is this normal, or is this a bug in the latest driver? Has anyone encountered the same?

 

 

(2) I have the following four Intel adapters on the said IPC:

(a) 2x Intel 82574L Gigabit Network Adapter (on an add-on PCIe card)

(b) 1x Intel I210 Gigabit Network Adapter

(c) 1x Intel I219-LM Gigabit Network Adapter

 

 

I am not able to team the two Intel 82574L adapters together; it gives an error: each team must include at least one Intel server device or Intel integrated connection that supports teaming. However, I am able to team one Intel 82574L adapter with either the I210 or the I219-LM adapter.

What is an Intel integrated connection? Are the Intel 82574L adapters not considered server adapters?

 

SFP+ Not Detecting In X710-DA4


Hi Team,

 

We have Intel X710-DA4 NIC cards installed in our newly deployed Lenovo ThinkServer SD350. The OS on the server is VMware ESXi 6.5.

 

We are using another vendor's SFP+ modules in the card, but the card is not supporting them. The link status is down and there is no LED.

However, the same SFP+ modules work fine in an Intel X520 NIC card in the same model of server. The X710 uses the i40e driver.
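
In case it is useful, this is the information we can pull from the ESXi shell (a hedged sketch; vmnic4 is only a placeholder for whichever vmnic the X710 ports map to):

# List NICs and confirm which driver/firmware the X710 ports are using
esxcli network nic list
esxcli network nic get -n vmnic4

# Show the module parameters the i40e driver currently honours
esxcli system module parameters list -m i40e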

 

Please help to fix this issue.

 

Thanks!

 

X710 Flow director issues on Linux


Hello all,

 

I am not able to set up Flow Director to filter flow-type ipv4. It does not seem to have this issue when the flow type is specified as tcp.

It's on Linux (4.9.27), with a freshly downloaded driver rather than the in-kernel one. Below is the output of the driver version, the firmware, and the ntuple filter I want to apply.

No error is shown anywhere.

 

Thank you!

 

ethtool -i i40e1

driver: i40e

version: 2.0.23

firmware-version: 5.05 0x80002927 1.1313.0

expansion-rom-version:

bus-info: 0000:05:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

ethtool -k i40e1

Features for i40e1:

rx-checksumming: off

tx-checksumming: off

        tx-checksum-ipv4: off

        tx-checksum-ip-generic: off [fixed]

        tx-checksum-ipv6: off

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: off

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: off

        tx-tcp-segmentation: off

        tx-tcp-ecn-segmentation: off

        tx-tcp-mangleid-segmentation: off

        tx-tcp6-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: off

generic-receive-offload: off

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-gre-csum-segmentation: off [fixed]

tx-ipxip4-segmentation: on

tx-ipxip6-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-udp_tnl-csum-segmentation: off [fixed]

tx-gso-partial: off [fixed]

tx-sctp-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

i40e version:        2.0.23

 

#ethtool -U i40e1 flow-type ip4 action -1 loc 1
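
For comparison, this is the hedged form of the rules I would expect to work (the addresses, queue and rule locations below are only placeholders); a bare ip4 rule with nothing but an action may need explicit match fields, and ethtool -u shows what the NIC actually programmed:

# TCP/IPv4 rule of the form that already works per the above
ethtool -U i40e1 flow-type tcp4 src-ip 10.0.0.1 dst-port 80 action 2 loc 1

# Plain IPv4 attempt with explicit match fields instead of a bare action
ethtool -U i40e1 flow-type ip4 src-ip 10.0.0.1 dst-ip 10.0.0.2 action -1 loc 2

# Show the rules actually programmed (empty output = nothing stuck)
ethtool -u i40e1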

NIC names empty


Hi!

I'm trying to create a team on my NIC cards.

One server I set up successfully, but the 2nd server shows me a BSOD and the teaming menu looks like this:

82576 card on board, 2 connections

The PRO/1000 is PCIe.

How can I assign the names back?


vxlan: non-ECT with TOS=0x02 logs were generated


Hi

 

I use Intel 1G and 10G Ethernet cards with Open vSwitch under a VXLAN overlay network.

Some VMs started to use ECN, and I found that weird logs about non-ECT were being generated.

I checked the VXLAN packets, and none of them had errors. Also, the ECN bits in the VXLAN outer header were normal according to RFC 6040.

I changed the Ethernet card to a Broadcom one, and there was no non-ECT log.

I need to stop these spurious non-ECT logs. How should I do that?

 

Please see the information below:

 

[root@compute01 ~]# grep vxlan /var/log/messages | tail

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:33:32 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:33:34 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

 

[root@compute01 ~]# uname -a

Linux compute01 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@compute01 ~]# ethtool -i eth4

driver: ixgbe

version: 4.4.0-k-rh7.4

firmware-version: 0x80000609

expansion-rom-version:

bus-info: 0000:04:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

[root@compute01 ~]# ethtool -k eth4

Features for eth4:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: off [fixed]

        tx-checksum-ip-generic: on

        tx-checksum-ipv6: off [fixed]

        tx-checksum-fcoe-crc: on [fixed]

        tx-checksum-sctp: on

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

        tx-tcp-mangleid-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: on [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: on

tx-sit-segmentation: on

tx-udp_tnl-segmentation: on

tx-mpls-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

busy-poll: on [fixed]

tx-gre-csum-segmentation: on

tx-udp_tnl-csum-segmentation: on

tx-gso-partial: on

tx-sctp-segmentation: off [fixed]

l2-fwd-offload: off

hw-tc-offload: off [fixed]

 

 

 

///////////////////////////////////////////////////////////////////

root@oscompute01:~# grep vxlan /var/log/syslog | tail 

Dec 13 18:53:50 oscompute01 kernel: [188924.432135] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.431957] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.432004] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.447808] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:52 oscompute01 kernel: [188926.447743] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:52 oscompute01 kernel: [188926.463339] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:54 oscompute01 kernel: [188928.463280] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:54 oscompute01 kernel: [188928.478883] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:55 oscompute01 kernel: [188929.463051] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:55 oscompute01 kernel: [188929.478835] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

root@oscompute01:~# uname -a

Linux oscompute01 4.4.0-103-generic #126-Ubuntu SMP Mon Dec 4 16:23:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@oscompute01:~# ethtool -i eth1

driver: igb

version: 5.3.5.12

firmware-version: 1.70, 0x80000f44, 1.1752.0

expansion-rom-version:

bus-info: 0000:04:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

root@oscompute01:~# ethtool -k eth1

Features for eth1:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: on

        tx-checksum-ip-generic: off [fixed]

        tx-checksum-ipv6: on

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: off [fixed]

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off [requested on]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off [fixed]

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: off [fixed]

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

 

 

///////////////////////////////////////////////////////////////////

[root@compute01 ~]# tail -f /var/log/messages | grep -i vxlan

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:30 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:30 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

^C

[root@compute01 ~]# uname -a

Linux compute01 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@compute01 ~]# ethtool -i eth4

driver: ixgbe

version: 5.3.3

firmware-version: 0x800005b6, 1.1752.0

expansion-rom-version:

bus-info: 0000:04:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

[root@compute01 ~]# ethtool -k eth4

Features for eth4:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: off [fixed]

        tx-checksum-ip-generic: on

        tx-checksum-ipv6: off [fixed]

        tx-checksum-fcoe-crc: on [fixed]

        tx-checksum-sctp: on

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

        tx-tcp-mangleid-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: on [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-mpls-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

busy-poll: on [fixed]

tx-gre-csum-segmentation: on

tx-udp_tnl-csum-segmentation: on

tx-gso-partial: on

tx-sctp-segmentation: off [fixed]

l2-fwd-offload: off [fixed]

hw-tc-offload: off
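
One experiment I can still try (hedged, not a confirmed fix) is to toggle the receive and tunnel offloads off on the Intel port and watch whether the driver's receive path is what ends up mangling the outer ECN bits; the feature names are the ones from the ethtool -k output above:

# Which aggregation/tunnel offloads are currently active on the Intel port?
ethtool -k eth4 | grep -E 'udp_tnl|generic-receive-offload|large-receive-offload'

# Turn them off temporarily, then check whether the non-ECT messages stop
ethtool -K eth4 gro off lro off
ethtool -K eth4 tx-udp_tnl-segmentation off tx-udp_tnl-csum-segmentation off
grep vxlan /var/log/messages | tail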

i210 blue screen on install and during operation


Hello,

 

I'm running Windows Server 2016 on an Asus P10 WS mainboard. This MB has two i210 network interfaces on-board. Currently I'm using only one of them; the second one is disabled in Windows Device Manager.

 

The installed driver has a date of 08.12.2016 and a version of 12.15.184.0 (as shown in Device Manager).

 

For the interface in use I have enabled teaming in Windows to use different VLANs. For a while now, there were two virtual adapters "Microsoft Network Adapter Multiplexor Driver #X" in my system, each configured with a different VLAN ID. This has been working quite well for some time now.

 

Recently, I tried to add a third VLAN. During configuration, the machine crashed with a blue screen. After reboot, the third virtual interface was there regardless. Now, I'm experiencing (seemingly) random crashes (blue screens) every once in a while. They seem to be more frequent when the new, third virtual adapter is being used a lot, but I can't really put my finger on it.

 

However, when I run the executable of the latest Intel driver package, PROWinx64.exe v22.7.1, there is a reproducible blue screen as soon as the new driver is installed. There is an additional popup from the installer that says something like "installing driver for i210 Gigabit connection...". That's when the machine crashes consistently.

 

The memory dump analysis says the following (the first example is a "random" crash, the second one is the "driver install" crash):

 

***

Probably caused by : e1r65x64.sys ( e1r65x64+150ad )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000028, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff8099451abdf, address which referenced memory

***

 

***

Probably caused by : NdisImPlatform.sys ( NdisImPlatform!implatReturnNetBufferLists+1d6 )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000010, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff806566040b3, address which referenced memory

***

 

(Complete analysis logfile is available if anyone is interested.)

 

How can I fix this?

 

 

Regards


Intel® PRO/1000 PT Server Adapter esn code?


I need to know the ESN code for the PRO/1000 PT Server Adapter.

x710 SR-IOV problems


Hi all,

 

I have following baseline:

 

Dell R630 (2x14 core Xeon, 128GB RAM, 800GB SSD)

x710 4-port NIC, in 10Gbit mode

SUSE12SP1

Latest NIC firmware, but default PF/VF drivers (came with the OS, v1.3.4)

VF driver blacklisted on hypervisor

Setup according to official Intel and Suse documentation, KVM hypervisor

 

With the test setup, a single VM with a single VF and untagged traffic, I could achieve basically line-rate numbers: with MTU 1500, I saw about 770 Kpps and a bandwidth of 9.4 Gbps, for both UDP and TCP traffic, with no packet drops. There is plenty of processing power, the setup is nice and tidy, and everything works as it should.

 

The production setup is a bit different: the VM uses 3 VFs, one for each PF (the 4th PF is not used). All VFs except the first one carry untagged traffic. The first VF carries two types of traffic: one untagged (VLAN 119) and one tagged (VLAN 1108). Tagging is done inside the VM. The setup worked fine for some time, confirming the test-setup numbers. However, after a while the following errors started to appear in the hypervisor logs:

 

Mar  11 14:32:52 test_machine1 kernel: [10423.889924] i40e 0000:01:00.1: TX driver issue detected on VF 0

Mar  11 14:32:52 test_machine1 kernel: [10423.889925] i40e 0000:01:00.1: Too many MDD events on VF 0, disabled

 

And the performance numbers became erratic: sometimes it worked perfectly, sometimes it did not. Most importantly, packet drops occurred.

 

So I've reinstalled everything (hypervisor and VMs), configured exactly as before using automated tools, but upgraded the PF and VF drivers to the latest ones (v2.0.19/v2.0.16). The errors in the logs disappeared, but the issue persists. Now I have this in the logs:

 

2017-03-12T11:33:43.356014+01:00 test_machine1 kernel: [  420.439112] i40e 0000:01:00.1: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:43.376009+01:00 test_machine1 kernel: [  420.459168] i40e 0000:01:00.0: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:44.352009+01:00 test_machine1 kernel: [  421.435124] i40e 0000:01:00.2: Unable to add VLAN filter 0 for VF 0, error -22

 

I've increased the VM CPU count, the VF ring sizes, the VM Linux software buffers, and the VM netdev.budget kernel parameter (the amount of CPU time assigned to NIC processing), turned off VF spoof checking in the hypervisor, etc., but the situation remains the same. Sometimes it works perfectly, other times it does not.

 

Can you please provide some insight? Since the rx_dropped counter is increasing in the VM, I suspect a driver/VF issue.

Is there a way to handle this problem without switching to untagged traffic?
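
For reference, this is the per-VF configuration I can adjust from the hypervisor (a hedged sketch; enp1s0f1 and the VF index are placeholders for the real PF name and VF number):

# Show the current per-VF settings on the PF
ip link show enp1s0f1

# Option A: let the PF insert the tag (the guest then sends untagged frames,
# but the wire still carries VLAN 1108)
ip link set enp1s0f1 vf 0 vlan 1108

# Option B: keep tagging inside the VM, but mark the VF trusted and leave
# spoof checking off so guest-inserted tags are not filtered
ip link set enp1s0f1 vf 0 vlan 0
ip link set enp1s0f1 vf 0 trust on
ip link set enp1s0f1 vf 0 spoofchk off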

 

 

 

Thank you in advance,

Ante


