Channel: Intel Communities : Discussion List - Wired Ethernet

Is it possible to connect two XL710-QDA2 adapters directly using XLDACBL3 (without a switch), and does it work in full-duplex mode?


Hi All,

 

I am using two Dell PowerEdge R430 servers, and each server has one dual-port XL710-QDA2 [1]. We now want to connect these two NICs directly using an XLDACBL3 [2] cable, but we are not sure the setup will work. On server 1 we are running a DPDK-based L2Fwd application, and on the other server DPDK-pktgen is generating the packets. Server 1 returns the packets back to server 2.

 

So can we use XLDACBL3 cables to connect these NICs directly and will they work in full-duplex mode?

 

[1]: Intel® Ethernet Converged Network Adapter XL710 10/40 GbE

[2]: QSFP+ Modules/Cables Compatible with Intel® Ethernet Server Adapter...
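
For what it's worth, a back-to-back DAC link between two Ethernet ports is a standard point-to-point, full-duplex setup, so no switch should be required. A quick way to verify the link once the cable is in place (a sketch; enp4s0f0 is a placeholder for your actual port name):

sudo ethtool enp4s0f0      # look for "Speed: 40000Mb/s", "Duplex: Full", "Link detected: yes"
sudo ethtool -i enp4s0f0   # confirms the i40e driver and NVM version in play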

 

Thanks,

Anmol


X710 LLDP broken with ESXi


Hi. We have Dell PowerEdge R730s with Intel X710 cards. LLDP does not work at all, even with the latest Dell firmware for the card and the driver from the VMware HCL. We are running the latest ESXi 6.5. I have noticed other threads reporting that LLDP simply does not work.

 

This is very important to us. Is there a time frame for when this will be fixed? Thanks.
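
Not an official answer, but the usual explanation for this symptom: the X710 firmware runs its own LLDP agent, which consumes incoming LLDP frames before the OS ever sees them. On Linux, newer i40e drivers let you turn that agent off via a private flag (sketched below; whether the ESXi i40en driver exposes an equivalent knob is something Intel or VMware would have to confirm):

sudo ethtool --set-priv-flags eth0 disable-fw-lldp on   # eth0 is a placeholder interface name
sudo ethtool --show-priv-flags eth0                     # verify the flag took effect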

SR-IOV invokes system reboot


Dear all, I use SR-IOV VFs as Docker network devices via Kubernetes, but there is a fairly high probability of a system reboot.

Syslog records nothing, and it only happens when VFs are released and allocated.

What should I do to solve this problem?

 

My environment:

  BIOS has VT-d enabled and SR-IOV globally enabled

  OS is Ubuntu 16.04, kernel version 4.10, with boot options:

       BOOT_IMAGE=/vmlinuz-4.10.0-35-lowlatency root=UUID=xxxx ro cgroup_enable=memory swapaccount=1 intel_iommu=on iommu=pt

  max_vfs set to 63:

       echo 63 >/sys/class/net/enp4s0f1/device/sriov_numvfs

 

  PCI device info

04:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

04:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

.....

 

   $ ethtool -i enp4s0f1

driver: ixgbe

version: 4.4.0-k

firmware-version: 0x800007f5

expansion-rom-version:

bus-info: 0000:04:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no
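
For what it's worth, a minimal allocate/release loop like the sketch below (enp4s0f1 is the PF from the environment above) can show whether the VF churn alone triggers the reset, independent of Kubernetes. Since syslog captures nothing, a netconsole or kdump capture of the panic would also help pin it down.

# stress the VF allocate/release path directly on the PF
for i in $(seq 1 50); do
    echo 63 > /sys/class/net/enp4s0f1/device/sriov_numvfs   # allocate 63 VFs
    sleep 2
    echo 0 > /sys/class/net/enp4s0f1/device/sriov_numvfs    # release them again
    sleep 2
done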

ixgbe driver support for X552 controller and external PHY


Hi

 

I am using the X552 controller (ADLINK com-ex7) with an external PHY (AQR107).

 

I just wanted to know whether this configuration is supported by the ixgbe driver or not. If not, is there any patch available to support it?

 

Currently I am getting the following error:

 

[    0.748518] pci 0000:03:00.1: [8086:15ad] type 00 class 0x020000

[    0.748532] pci 0000:03:00.1: reg 0x10: [mem 0xfb200000-0xfb3fffff 64bit pref]

[    0.748553] pci 0000:03:00.1: reg 0x20: [mem 0xfb600000-0xfb603fff 64bit pref]

[    0.748560] pci 0000:03:00.1: reg 0x30: [mem 0xfb800000-0xfb87ffff pref]

[    0.748606] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold

[    0.748623] pci 0000:03:00.1: reg 0x184: [mem 0xfba00000-0xfba03fff 64bit]

[    0.748625] pci 0000:03:00.1: VF(n) BAR0 space: [mem 0xfba00000-0xfbafffff 64bit] (contains BAR0 for 64 VFs)

[    0.749006] pci 0000:03:00.1: reg 0x190: [mem 0xfb900000-0xfb903fff 64bit]

[    0.749008] pci 0000:03:00.1: VF(n) BAR3 space: [mem 0xfb900000-0xfb9fffff 64bit] (contains BAR3 for 64 VFs)

[   14.380732] ixgbe 0000:03:00.1: HW Init failed: -17

[   14.382261] ixgbe: probe of 0000:03:00.1 failed with error -17
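
For context, an observation from reading the driver sources (worth verifying against your driver version): the -17 status comes from the driver's own error table, where it corresponds to IXGBE_ERR_PHY_ADDR_INVALID, which is consistent with ixgbe probing for a PHY it does not recognize, such as the AQR107.

grep -n "IXGBE_ERR_PHY_ADDR_INVALID" drivers/net/ethernet/intel/ixgbe/ixgbe_type.h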

 

 

Thanks

Intel Network Adapter Driver for Windows 7 - Very Slow Response When Accessing the Tabs in Windows Device Manager


I downloaded and installed the latest v22.10 of the above-mentioned driver (PROWinx64Legacy.exe) on a Beckhoff IPC running Windows 7 Professional x64 Edition with SP1, for the purpose of NIC teaming the installed Intel adapters. After the successful installation, I encountered the following:

 

(1) I found that it takes a long time (around 8 to 10 seconds) for the properties window of the various Intel adapters to show up once accessed. The added tabs (such as Teaming and Link Speed) behave the same way and take a long time (5 to 8 seconds) to activate and display their contents. This only affects the Intel network adapters. I installed the driver on two IPCs and the observation is the same. I tried uninstalling and reinstalling the driver on the same two IPCs with no improvement.

 

Is this normal, or is this a bug in the latest driver? Has anyone encountered the same?

 

 

(2) I have the following four Intel adapters on the said IPC:

(a) 2x Intel 82574L Gigabit Network adapters (on an add-on PCIe card)

(b) 1x Intel I210 Gigabit Network adapter

(c) 1x Intel I219-LM Gigabit Network adapter

 

 

I am not able to team the two Intel 82574L adapters together; it gives an error: each team must include at least one Intel server device or Intel integrated connection that supports teaming. However, I am able to team one Intel 82574L adapter with either the I210 or the I219-LM adapter.

What is an "Intel integrated connection"? Are the Intel 82574L adapters not server adapters?

 

Reset adapter unexpectedly


I have a system I use as a gateway:

Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)

 

It is a component of a fit-PC Intense PC running Linux with the 4.9.67-1-lts #1 SMP kernel from Arch Linux. This problem has been going on across several kernels now, over the last month.

 

Intense PC models specifications

 

It has been having trouble resetting and possibly checksumming. I use an ssh proxy tunnel to panix.com which has always remained open and stable. Now it dies:

 

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 3: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 44: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 45: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 46: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 47: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 48: new [direct-tcpip]

ssh_dispatch_run_fatal: Connection to 166.84.1.2 port 22: message authentication code incorrect

[ruben@flatbush ~]$

 

from dmesg:

[38972.211043] ffff88041f798da0: f4 6e af f6 83 02 1d 62 8c 12 0f 1a e9 99 89 cf  .n.....b........

[38972.211044] ffff88041f798db0: d3 b9 fc d0 d4 7a dc 96 d0 b0 f9 c3 bb fe c7 ea  .....z..........

[38972.211046] ffff88041f798dc0: 39 51 ea 2c 6d 80 64 43 59 da 15 91 3a 6c 40 05  9Q.,m.dCY...:l@.

[38972.211047] ffff88041f798dd0: a6 50 40 98 d0 bb cf f3 c7 1f e2 41 bd ec 19 05  .P@........A....

[38972.211049] ffff88041f798de0: ab 51 d0 9a 46 90 9b db 3b b2 fd c1 c9 c0 ef 7b  .Q..F...;......{

[38972.211051] ffff88041f798df0: 2a 45 2b f6 fb 42 96 56 af 45 89 23 99 51 4e ef  *E+..B.V.E.#.QN.

[38972.211053] ffff88041f798e00: fd 6c 9a df 51 66 db 3e ca d9 4c f2 ad cb 89 7a  .l..Qf.>..L....z

[38972.211054] ffff88041f798e10: c4 10 47 21 fc a5 4d f3 06 79 1f 8a ba d1 45 53  ..G!..M..y....ES

[38972.211056] ffff88041f798e20: bb 68 0b 8a ab 63 ae b2 1c 38 b0 35 6f 37 66 ec  .h...c...8.5o7f.

[38972.211057] ffff88041f798e30: 02 4a                                            .J

[38972.211062] e1000e 0000:00:19.0 eth1: Reset adapter unexpectedly

[38978.157727] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

[39014.162430] elogind[1369]: New session c4 of user ruben.

[39089.012361] lp: driver loaded but no devices found

[39089.040701] st: Version 20160209, fixed bufsize 32768, s/g segs 256

 

 

It has been suggested that I could tweak the module settings, but I'm not certain what to try or how to proceed. Guessing blindly is usually not useful.

 

 

modinfo gives me this:

 

alias:      pci:v00008086d0000105Esv*sd*bc*sc*i*
depends:    ptp
intree:     Y
vermagic:   4.9.67-1-lts SMP mod_unload modversions
parm:       debug:Debug level (0=none,...,16=all) (int)
parm:       copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm:       TxIntDelay:Transmit Interrupt Delay (array of int)
parm:       TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm:       RxIntDelay:Receive Interrupt Delay (array of int)
parm:       RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm:       InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm:       IntMode:Interrupt Mode (array of int)
parm:       SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm:       KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm:       WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)
parm:       CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int)
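
Rather than guessing blindly, a contained experiment is to set one of the parameters listed above persistently and watch whether the resets stop. The sketch below tries SmartPowerDownEnable, and also disables segmentation offloads, a commonly reported culprit for e1000e resets (these are assumptions to test, not a documented fix):

# /etc/modprobe.d/e1000e.conf -- one illustrative option from the list above
# (takes effect after a module reload or reboot)
options e1000e SmartPowerDownEnable=0

# offloads can be toggled at runtime on eth1 without reloading the module
sudo ethtool -K eth1 tso off gso off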

I210 External Flash Programming with a Tegra X2


I'm making a carrier card for a Tegra X2 computer-on-module (running Linux 4.4.38-tegra aarch64) and using the Intel I210 to get a second Ethernet port. I've verified that the I210 works with the Tegra by using an external Flash chip that was programmed, before placing it on the board, with an image copied from an I210 adapter card's Flash. To make it easier to manufacture, I'd like to program the Flash in-circuit. I've seen BootUtil and EEUPDATE used to program the Flash and have downloaded both. Is one of these better for an application like this?

 

I'm new to Linux and relying on Google to fill in the blanks, so I've used "sudo -i" to get root where the EEUPDATE guide says to, and followed the rest of the install-for-Linux directions as best I could. It says it's successful, but it doesn't seem to be working.

 

root@lamont-tx2|~> cd Temporary/

root@lamont-tx2|Temporary> ls

install  iqvlinux.tar.gz

root@lamont-tx2|Temporary> chmod 777 install

root@lamont-tx2|Temporary> ls

install  iqvlinux.tar.gz

root@lamont-tx2|Temporary> ./install

Extracting archive..OK!

Compiling the driver...OK!

Removing existing iqvlinux.ko driver...OK

Copying iqvlinux.ko driver file to /lib/modules directory...OK!

Driver installation verification...Installed!

root@lamont-tx2|~> exit

logout

nvidia@lamont-tx2|/> EEUPDATE

-bash: EEUPDATE: command not found

 

From reading the guide, it sounds like EEUPDATE without options should give me a list of devices. What am I not doing, or is this a larger problem? The I210 shows up in lspci as an un-programmed device.

 

nvidia@lamont-tx2|~> lspci

00:01.0 PCI bridge: NVIDIA Corporation Device 10e5 (rev a1)

01:00.0 Ethernet controller: Intel Corporation Device 1531 (rev 03)
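
One likely explanation for the "command not found" above (a guess from the shell behavior, not from the guide): Linux does not search the current directory, and tool names are case-sensitive, so the binary has to be invoked with an explicit path. Assuming the 64-bit Linux build is named eeupdate64e, as in Intel's Preboot packages:

cd /path/to/extracted/eeupdate    # wherever the tool was unpacked
file ./eeupdate64e                # check the build's architecture first: the Tegra X2 is aarch64,
                                  # so an x86-only binary will not run on it at all
sudo ./eeupdate64e                # with no options, should list the detected adapters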

 

BootUtil looks like the better choice, but my inexperience with Linux is getting in the way. I've stopped at step "2. Compile the driver module." from the BootUtil guide. I've found the following directions for compiling a driver, but I'm not sure if that's what I need to do with it:

Compiling Drivers for Linux and Adding Them to your Linux Automation Image | Symantec Connect

 

To clarify what I'm after: can I program the Flash in-circuit with the Tegra X2? If so, is BootUtil or EEUPDATE the better choice? Am I on the right trail with these tools, and is there any help out there to get me down the road quicker?

 

Any help would be very appreciated.

 

Thanks,

 

LaMont

Intel I210-T1 and disabling MCTP


So I noticed the I210-T1 supports MCTP, and I'm not seeing where I can disable this; I checked the firmware boot config guide and it was conspicuously blank: Upgrade, Enable, or Disable Flash with the Intel® Ethernet Flash...

 

Also, does anybody know if the I210-T1 hooks into Intel ME if the onboard NIC (which ME normally uses) is disabled? I get the feeling Intel is sneaky like that.

 



X710 card: no working modules


"You cannot use other brands of SFP+ (10 Gbps) modules with these adapters".

 

Dear Intel, you are losing a client. Today we had a problem in our network: we have X710 cards but no working modules. This is the last time we buy them.

NIC names empty


Hi!

I am trying to create a team on my NIC cards.

One server I set up successfully, but the 2nd server shows me a BSOD, and in the teaming menu the NIC names appear empty:

82576 card on board, 2 connections

the PRO/1000 is PCIe

How can I assign the names back?

Audio stutter and system freezing with Intel I350-T4V2


Hello everyone,

I am having an issue with my music, videos, games, and general system usage coming to a brief halt every so often, caused by high DPC latency reported for the driver tcpip.sys, which I believe is related to the Intel I350-T4V2 NIC in my system. This is the report from LatencyMon, generated after nine minutes of running:

_________________________________________________________________________________________________________

CONCLUSION

_________________________________________________________________________________________________________

Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. At least one detected problem appears to be network related. In case you are using a WLAN adapter, try disabling it to get better results. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.

LatencyMon has been analyzing your system for  0:09:15  (h:mm:ss) on all processors.

 

 

 

 

_________________________________________________________________________________________________________

SYSTEM INFORMATION

_________________________________________________________________________________________________________

Computer name:                                        STEVEN-DT

OS version:                                           Windows 10 , 10.0, build: 15063 (x64)

Hardware:                                             ASRock, Z370 Gaming K6

CPU:                                                  GenuineIntel Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz

Logical processors:                                   12

Processor groups:                                     1

RAM:                                                  32701 MB total

 

 

 

 

_________________________________________________________________________________________________________

CPU SPEED

_________________________________________________________________________________________________________

Reported CPU speed:                                   3696 MHz

Measured CPU speed:                                   1 MHz (approx.)

 

 

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

 

 

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.

 

 

 

 

 

 

_________________________________________________________________________________________________________

MEASURED INTERRUPT TO USER PROCESS LATENCIES

_________________________________________________________________________________________________________

The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

 

 

Highest measured interrupt to process latency (µs):   64369.038961

Average measured interrupt to process latency (µs):   5.038917

 

 

Highest measured interrupt to DPC latency (µs):       64365.437229

Average measured interrupt to DPC latency (µs):       1.360576

 

 

 

 

_________________________________________________________________________________________________________

REPORTED ISRs

_________________________________________________________________________________________________________

Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

 

 

Highest ISR routine execution time (µs):              92.833333

Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Highest reported total ISR routine time (%):          0.100551

Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Total time spent in ISRs (%)                          0.122042

 

 

ISR count (execution time <250 µs):                   864900

ISR count (execution time 250-500 µs):                0

ISR count (execution time 500-999 µs):                0

ISR count (execution time 1000-1999 µs):              0

ISR count (execution time 2000-3999 µs):              0

ISR count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED DPCs

_________________________________________________________________________________________________________

DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

 

 

Highest DPC routine execution time (µs):              75280.324134

Driver with highest DPC routine execution time:       tcpip.sys - TCP/IP Driver, Microsoft Corporation

 

 

Highest reported total DPC routine time (%):          0.043887

Driver with highest DPC total execution time:         nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver, Version 388.31 , NVIDIA Corporation

 

 

Total time spent in DPCs (%)                          0.135294

 

 

DPC count (execution time <250 µs):                   3095769

DPC count (execution time 250-500 µs):                0

DPC count (execution time 500-999 µs):                1

DPC count (execution time 1000-1999 µs):              0

DPC count (execution time 2000-3999 µs):              0

DPC count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED HARD PAGEFAULTS

_________________________________________________________________________________________________________

Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

 

 

 

 

Process with highest pagefault count:                 none

 

 

Total number of hard pagefaults                       0

Hard pagefault count of hardest hit process:          0

Highest hard pagefault resolution time (µs):          0.0

Total time spent in hard pagefaults (%):              0.0

Number of processes hit:                              0

 

 

 

 

_________________________________________________________________________________________________________

PER CPU DATA

_________________________________________________________________________________________________________

CPU 0 Interrupt cycle time (s):                       21.807476

CPU 0 ISR highest execution time (µs):                92.833333

CPU 0 ISR total execution time (s):                   8.127851

CPU 0 ISR count:                                      864377

CPU 0 DPC highest execution time (µs):                418.928030

CPU 0 DPC total execution time (s):                   7.968359

CPU 0 DPC count:                                      2881669

_________________________________________________________________________________________________________

CPU 1 Interrupt cycle time (s):                       5.576597

CPU 1 ISR highest execution time (µs):                18.037338

CPU 1 ISR total execution time (s):                   0.000633

CPU 1 ISR count:                                      523

CPU 1 DPC highest execution time (µs):                156.160173

CPU 1 DPC total execution time (s):                   0.031753

CPU 1 DPC count:                                      2473

_________________________________________________________________________________________________________

CPU 2 Interrupt cycle time (s):                       3.872798

CPU 2 ISR highest execution time (µs):                0.0

CPU 2 ISR total execution time (s):                   0.0

CPU 2 ISR count:                                      0

CPU 2 DPC highest execution time (µs):                111.264610

CPU 2 DPC total execution time (s):                   0.109259

CPU 2 DPC count:                                      30866

_________________________________________________________________________________________________________

CPU 3 Interrupt cycle time (s):                       3.914723

CPU 3 ISR highest execution time (µs):                0.0

CPU 3 ISR total execution time (s):                   0.0

CPU 3 ISR count:                                      0

CPU 3 DPC highest execution time (µs):                75280.324134

CPU 3 DPC total execution time (s):                   0.213586

CPU 3 DPC count:                                      3120

_________________________________________________________________________________________________________

CPU 4 Interrupt cycle time (s):                       4.130378

CPU 4 ISR highest execution time (µs):                0.0

CPU 4 ISR total execution time (s):                   0.0

CPU 4 ISR count:                                      0

CPU 4 DPC highest execution time (µs):                127.520022

CPU 4 DPC total execution time (s):                   0.120647

CPU 4 DPC count:                                      40343

_________________________________________________________________________________________________________

CPU 5 Interrupt cycle time (s):                       3.761527

CPU 5 ISR highest execution time (µs):                0.0

CPU 5 ISR total execution time (s):                   0.0

CPU 5 ISR count:                                      0

CPU 5 DPC highest execution time (µs):                83.285173

CPU 5 DPC total execution time (s):                   0.004639

CPU 5 DPC count:                                      1086

_________________________________________________________________________________________________________

CPU 6 Interrupt cycle time (s):                       4.832866

CPU 6 ISR highest execution time (µs):                0.0

CPU 6 ISR total execution time (s):                   0.0

CPU 6 ISR count:                                      0

CPU 6 DPC highest execution time (µs):                101.324675

CPU 6 DPC total execution time (s):                   0.199118

CPU 6 DPC count:                                      46428

_________________________________________________________________________________________________________

CPU 7 Interrupt cycle time (s):                       3.556605

CPU 7 ISR highest execution time (µs):                0.0

CPU 7 ISR total execution time (s):                   0.0

CPU 7 ISR count:                                      0

CPU 7 DPC highest execution time (µs):                82.946970

CPU 7 DPC total execution time (s):                   0.003708

CPU 7 DPC count:                                      596

_________________________________________________________________________________________________________

CPU 8 Interrupt cycle time (s):                       3.937102

CPU 8 ISR highest execution time (µs):                0.0

CPU 8 ISR total execution time (s):                   0.0

CPU 8 ISR count:                                      0

CPU 8 DPC highest execution time (µs):                136.426407

CPU 8 DPC total execution time (s):                   0.144712

CPU 8 DPC count:                                      39441

_________________________________________________________________________________________________________

CPU 9 Interrupt cycle time (s):                       3.597930

CPU 9 ISR highest execution time (µs):                0.0

CPU 9 ISR total execution time (s):                   0.0

CPU 9 ISR count:                                      0

CPU 9 DPC highest execution time (µs):                82.416126

CPU 9 DPC total execution time (s):                   0.014064

CPU 9 DPC count:                                      6686

_________________________________________________________________________________________________________

CPU 10 Interrupt cycle time (s):                       4.082868

CPU 10 ISR highest execution time (µs):                0.0

CPU 10 ISR total execution time (s):                   0.0

CPU 10 ISR count:                                      0

CPU 10 DPC highest execution time (µs):                122.268939

CPU 10 DPC total execution time (s):                   0.162320

CPU 10 DPC count:                                      36323

_________________________________________________________________________________________________________

CPU 11 Interrupt cycle time (s):                       3.982992

CPU 11 ISR highest execution time (µs):                0.0

CPU 11 ISR total execution time (s):                   0.0

CPU 11 ISR count:                                      0

CPU 11 DPC highest execution time (µs):                81.980519

CPU 11 DPC total execution time (s):                   0.038943

CPU 11 DPC count:                                      6742

_________________________________________________________________________________________________________

I have tried updating my I350's driver to the latest version (12.15.184.0), but the problem persists. I have the latest UEFI update from ASRock (v1.30), and Windows is up to date with all the latest updates. I am at a loss as to what to do to solve this issue.

 

Thanks in advance.

PTP protocol disabled on X710


I recently found that the PTP protocol was disabled on the X710 for security reasons (L4 timestamping rejected, INTEL-SA-00063). Just wondering: is there any plan to re-enable PTP in the future?

Using QCU to configure 4x10G on XL710-QDA2: "QSFP+ Configuration modification is not supported by this adapter."


We have spent more than one week trying to fix this problem, without success. Our server is built as follows:

 

Motherboard: Intel S2600CW2R

CPU: E5 2660 V4

Network Adapter: XL710-QDA2

OS: Linux Ubuntu 16.04

 

We are using the Intel QSFP+ Configuration Utility to configure the ports in 4x10G mode. All NVMs and drivers have been updated to the latest version. Here are the details of our log. Could anybody give us some help? Thanks in advance.
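
One thing worth ruling out first (a suggestion, not a known fix): the /SET option is refused when the NVM on the adapter does not expose the QSFP+ configuration feature, which is what the error below suggests, so confirming which NVM the XL710 ports actually booted with is a useful first step. Using a port name from the ifconfig output below:

sudo ethtool -i ens261f0    # the firmware-version field shows the running NVM/firmware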

 

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ sudo ./qcu64e /DEVICES

[sudo] password for tb-user:

Intel(R) QSFP+ Configuration Utility

 

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

 

NIC Seg:Bus Ven-Dev   Mode    Adapter Name

=== ======= ========= ======= ==================================================

1) 000:004 8086-1583 N/A     Intel(R) Ethernet Converged Network Adapter XL710-

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ sudo ./qcu64e /NIC=1 /set=4x10

Intel(R) QSFP+ Configuration Utility

 

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

 

QSFP+ Configuration modification is not supported by this adapter.

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ ifconfig -a

enp1s0f0  Link encap:Ethernet  HWaddr a4:bf:01:1b:89:9e

          inet addr:150.236.28.199  Bcast:150.236.28.223  Mask:255.255.255.224

          inet6 addr: fe80::5622:3a76:5c53:30f4/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:4079 errors:0 dropped:0 overruns:0 frame:0

          TX packets:322 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:472072 (472.0 KB)  TX bytes:38743 (38.7 KB)

          Memory:91920000-9193ffff

 

enp1s0f1  Link encap:Ethernet  HWaddr a4:bf:01:1b:89:9f

          UP BROADCAST MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

          Memory:91900000-9191ffff

 

ens261f0  Link encap:Ethernet  HWaddr 00:16:f6:f6:10:0a

          UP BROADCAST MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

ens261f1  Link encap:Ethernet  HWaddr 00:16:f6:f6:10:0b

          UP BROADCAST MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:138 errors:0 dropped:0 overruns:0 frame:0

          TX packets:138 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:12084 (12.0 KB)  TX bytes:12084 (12.0 KB)

 

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ sudo ethtool ens3f2

Settings for ens3f2:

Cannot get device settings: No such device

Cannot get wake-on-lan settings: No such device

Cannot get message level: No such device

Cannot get link status: No such device

No data available

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ sudo ethtool ens261f0

Settings for ens261f0:

        Supported ports: [ ]

        Supported link modes:   40000baseCR4/Full

                                40000baseSR4/Full

                                40000baseLR4/Full

        Supported pause frame use: Symmetric

        Supports auto-negotiation: Yes

        Advertised link modes:  40000baseCR4/Full

        Advertised pause frame use: No

        Advertised auto-negotiation: Yes

        Speed: Unknown!

        Duplex: Unknown! (255)

        Port: Other

        PHYAD: 0

        Transceiver: external

        Auto-negotiation: off

        Supports Wake-on: g

        Wake-on: g

        Current message level: 0x00000007 (7)

                               drv probe link

        Link detected: no

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ sudo ethtool ens261f1

Settings for ens261f1:

        Supported ports: [ ]

        Supported link modes:   40000baseCR4/Full

                                40000baseSR4/Full

                                40000baseLR4/Full

        Supported pause frame use: Symmetric

        Supports auto-negotiation: Yes

        Advertised link modes:  40000baseCR4/Full

        Advertised pause frame use: No

        Advertised auto-negotiation: Yes

        Speed: Unknown!

        Duplex: Unknown! (255)

        Port: Other

        PHYAD: 0

        Transceiver: external

        Auto-negotiation: off

        Supports Wake-on: g

        Wake-on: g

        Current message level: 0x00000007 (7)

                               drv probe link

        Link detected: no

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$ lspci | grep Ethernet

01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

04:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

04:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

tb-user@tbuser-S2600CWR:~/LinuxTool/Linux_x64$

Intel X520-2 adapter with direct SFP+ cable to D-Link DGS 1210 48 switch: how?


Hello,

 

First of all, I'm totally new to SFP+ cabling.

 

I am trying to connect my D-Link DGS 1210 48 switch with the Intel X520-2 adapter in the file server, using a 3M SFP+ cable (10GbE SFP+ Cable, Cisco/HP/H3C Compatible, CAB-10GSFP-P3M) ordered together with the Intel standard XDACBL3M.

 

The D-Link switch shows an amber light at the connection port (in the manual there is no amber status for this slot, but never mind, there is at least SOME light).

 

The file server adapter says "no network cable plugged in". I tried installing a new driver from Intel, but nothing happened. Then I plugged both ends of the cable into the Intel adapter, and suddenly all the lights started blinking and the status switched to connected.

 

After some massive abuse of Google, I stumbled over all sorts of standards; every manufacturer has its own, which is why you can buy D-Link cables and Intel cables and Cisco and Netgear and so on...

 

So my situation is: I cannot set up anything for these ports on the D-Link side (enable or disable), there is a blinking amber light, and on the other side the cable works with the Intel adapter.

 

I searched for cables keyed Intel-to-D-Link and just found one obscure reseller in another country that carries this sort of cable, so I don't think that is the standard way around this dilemma. What is the common workaround for this problem? Do IT departments wire the cables themselves, or is there an adapter or software solution?
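
For what it's worth, if the file server ran Linux, the ixgbe driver has an escape hatch for unqualified modules (to my knowledge the Windows driver has no equivalent switch, so this is only useful as a reference point):

sudo modprobe ixgbe allow_unsupported_sfp=1   # lets ixgbe bring up SFP+ modules not coded for Intel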

 

Thank you for helping me out.

 

Badb3nd3r

Loss of internet after 5 or 10 minutes on Ethernet under Windows 10


Hello,

 

Since moving to Windows 10, which I have just installed on my new SSD, I have run into a very strange problem: as soon as I turn on my PC I am connected to the internet and everything works perfectly, but after a while (around 5 to 10 minutes) I completely lose the internet connection and can no longer open any page (not even Google).

PS: The network icon at the bottom of the screen shows no anomaly.

 

I have tried updating my Ethernet driver with the latest one provided on the Intel site, and updating Windows to the very latest version via Windows Update, but nothing changed! I also tried resetting the network settings through "cmd" commands; it works at first, but the problem comes back, always after a few minutes of use. I then did a complete reinstallation of Windows, but that did not solve the problem. For the moment, I am making do with a workaround:

 

- Running the following cmd command: ipconfig /flushdns, which, strangely, brings the internet connection back immediately.
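
Since ipconfig /flushdns restores the connection, it is worth confirming that only name resolution dies while raw connectivity survives (standard Windows commands, run when the problem appears):

ping 8.8.8.8          # raw connectivity, no DNS involved
nslookup google.com   # name resolution; if this fails while the ping works, DNS is the culprit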

 

 

Another very important piece of information that may help: when I use Skype or other software that uses the internet, it keeps working even though I have no internet in my browser; however, I must not quit the software and relaunch it, because then it will not work. It seems like a port issue or something...

 

For info, my network card is an Intel Ethernet Connection I219-V

Motherboard: Asus Z170 Pro Gaming

Processor: Core i5 6600K

SSD: Samsung 850 EVO

RAM: Corsair 8GB x2

 

Thank you in advance for your help.


x1 vs x8 10gb performance...


I'm testing the throughput of a D-1500-based X552 and an X520, and according to lspci their PCIe lane configurations are as follows:

 

admin@capture:~$ sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" |grep -A 1 Ethernet

04:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

04:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

However, I'm getting the exact same numbers for each. It doesn't make sense that the X552 would only get 1 lane.

 

Is what lspci is telling me wrong?
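
A sanity check on the numbers (simple arithmetic from the PCIe encoding rules, taking the reported link states at face value): both 2.5 GT/s and 5 GT/s links use 8b/10b encoding, so usable bandwidth is roughly transfer rate x width x 8/10.

2.5 GT/s x 1 lane  x 8/10 =  2 Gbit/s   (the reported X552 link)
5.0 GT/s x 8 lanes x 8/10 = 32 Gbit/s   (the reported 82599 link)

A 10GbE port cannot move line rate through 2 Gbit/s of PCIe, so if both cards really sustain the same ~10 Gb/s, the x1 reading cannot be the state the X552 runs at under load. The X552 is integrated into the Xeon D SoC rather than sitting in a slot, so its reported LnkSta may simply not be meaningful; links can also downtrain at idle to save power and retrain under traffic. Re-reading lspci while the test is running would tell.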

 

Thanks.

Will the ixgbe driver support ndo_set_vf_link_state, defined in struct net_device_ops?

$
0
0

My NIC is a 2-port X520. lspci | grep Ethernet:

41:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

41:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

 

SR-IOV was enabled via: echo 4 > /sys/class/net/p1p1/device/sriov_numvfs

ip link show p1p1

 

p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000

    link/ether 38:4c:4f:29:08:fc brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 52:54:00:13:25:bc, vlan 99, spoof checking on, link-state auto, trust off, query_rss off

    vf 1 MAC 52:54:00:46:20:35, vlan 99, spoof checking on, link-state auto, trust off, query_rss off

    vf 2 MAC 52:04:0a:08:01:01, spoof checking on, link-state auto, trust off, query_rss off

    vf 3 MAC 52:04:0a:08:01:02, spoof checking on, link-state auto, trust off, query_rss off

 

Setting the link state fails via: ip link set p1p1 vf 0 state enable

RTNETLINK answers: Operation not supported

 

I am using CentOS 7.4 and checked the ixgbe driver source from CentOS; there is no ndo_set_vf_link_state handler.

I also checked the ixgbe drivers from Intel's download site, from version 3.x to version 5.3.4 (the latest for now), and found no ndo_set_vf_link_state implementation.

 

Is there a plan to support ndo_set_vf_link_state, or is a patch available?
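
For comparison, an observation from reading kernel sources (worth double-checking against your tree): the hook itself is standard net_device_ops API, and Intel's newer i40e driver does wire it up, so the missing piece is specifically an ixgbe implementation. A quick check against a kernel source tree:

grep -rln "ndo_set_vf_link_state" drivers/net/ethernet/intel/ixgbe/   # no matches
grep -rln "ndo_set_vf_link_state" drivers/net/ethernet/intel/i40e/    # i40e wires it up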

 

thanks a lot.

Losing internet connection after 5 min on Ethernet


Hello. Following my move to Windows 10, which I just installed on my new SSD, I encounter a very strange problem: as soon as I turn on my PC I am connected to the internet and everything works well, but after a while (about 5 to 10 minutes) I completely lose the internet connection and cannot open any more pages (not even Google).

PS: The network icon at the bottom of the screen does not show any anomalies. I tried to update my Ethernet driver with the latest driver provided on the Intel site, and to update Windows via Windows Update to the latest version, but nothing changed! I also tried to reset the network settings through "cmd" commands; it works at first, but later the problem comes back, still after a few minutes of use. I then did a complete reinstallation of Windows, but it did not solve the problem. For now, I am using a temporary workaround: running the cmd command ipconfig /flushdns, which makes the internet connection come back.

Another very important piece of info that can help: when I use Skype or other software that uses the internet, it continues to work even though I have no internet in my browser; however, if I quit the software and restart it, it will not work again. FYI I have a network card: Intel Ethernet Connection I219-V; Motherboard: Asus Z170 Pro Gaming; Processor: Core i5 6600K; SSD: Samsung 850 EVO; RAM: Corsair 8GB x2. Thank you for your help in advance.

I210 Network Teaming SFT Failover 12 Second Delay


Teaming failover takes ~12 seconds using I210 adapters on Windows 10; with previous system configurations, failover occurred in under 1 second. The attached Wireshark capture shows the data transfer from PC2 (192.168.28.72) to PC1 (192.168.28.71). The failover was induced after packet 368; transfer resumes at packet 392, after ~12.6 seconds. Is it possible to reduce the failover time?

 

System specs: ANS 22.10 driver, SFT-teamed I210 adapters with default power management and default advanced settings, Windows 10 Enterprise, Build 1607


