Channel: Intel Communities : Discussion List - Wired Ethernet
Viewing all 4405 articles
Browse latest View live

Determine transceiver module serial number from i40e (XL710) PF driver?

How can the transceiver module's serial number be extracted in the PF driver? Also, is it possible to extract the transceiver serial number in the VF driver?


i40e XL710 QDA2 as iSCSI initiator results in "RX driver issue detected, PF reset issued", and iscsi ping timeouts

Hi!

 

There is a dual E5-2690v3 box based on a Supermicro SYS-2028GR-TR/X10DRG-H, BIOS 1.0c, running Ubuntu 16.04.1 with all current updates.

It has an XL710-QDA2 card, fw 5.0.40043 api 1.5 nvm 5.04 0x80002537, driver 1.5.25 (the stock Ubuntu i40e driver 1.4.25 resulted in a crash), that is planned to be used as an iSCSI initiator endpoint. But there seems to be a problem: the log file fills up with "RX driver issue detected" messages, and occasionally the iSCSI link resets as pings time out. This is a critical error, as the mounted device becomes unusable!

 

So, Question 1: Is there something that can be done to fix the iSCSI behaviour of the XL710 card? When testing the card with iperf (2 concurrent sessions, the other end had a 10G NIC), there were no problems. The problems started when the iSCSI connection was established.

 

Question 2: Is there a way to force the card to work in PCI Express 2.0 mode? The server downgraded the card once after several previous failures and then it became surprisingly stable. I cannot find a way to make it persist though.

 

Some excerpts from log files (there are also occasional TX driver issues, but much less frequently than RX problems):

 

 

[  263.116057] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[  321.030246] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[  332.512601] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[  481.001787] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[  487.183237] NOHZ: local_softirq_pending 08

[  491.151322] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

..lots of the above messages...

[ 1181.099046] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1199.852665]  connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4295189627, last ping 4295190878, now 4295192132

[ 1199.852694]  connection1:0: detected conn error (1022)

[ 1320.412312]  session1: session recovery timed out after 120 secs

[ 1320.412325] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412331] sd 10:0:0:0: [sdk] killing request

[ 1320.412347] sd 10:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK

[ 1320.412352] sd 10:0:0:0: [sdk] CDB: Write Same(10) 41 00 6b 40 69 00 00 08 00 00

[ 1320.412356] blk_update_request: I/O error, dev sdk, sector 1799383296

[ 1320.412411] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412423] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412428] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412433] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412438] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412442] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412446] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412451] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412455] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412460] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412464] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412469] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412473] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412477] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412482] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412486] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412555] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412566] Aborting journal on device sdk-8.

[ 1320.412571] sd 10:0:0:0: rejecting I/O to offline device

[ 1320.412576] JBD2: Error -5 detected when updating journal superblock for sdk-8.

[ 1332.831851] sd 10:0:0:0: rejecting I/O to offline device

[ 1332.831864] EXT4-fs error (device sdk): ext4_journal_check_start:56: Detected aborted journal

[ 1332.831869] EXT4-fs (sdk): Remounting filesystem read-only

[ 1332.831873] EXT4-fs (sdk): previous I/O error to superblock detected

 

Unloading the kernel module and modprobe-ing it again:

 

[ 1380.970732] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 1.5.25

[ 1380.970737] i40e: Copyright(c) 2013 - 2016 Intel Corporation.

[ 1380.987563] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.127289] i40e 0000:81:00.0: MAC address: 3c:xx:xx:xx:xx:xx

[ 1381.246815] i40e 0000:81:00.0 p5p1: renamed from eth0

[ 1381.358723] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

[ 1381.416135] i40e 0000:81:00.0: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.454729] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1381.471584] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

[ 1381.605866] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xy

[ 1381.712287] i40e 0000:81:00.1 p5p2: renamed from eth0

[ 1381.751417] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.810607] IPv6: ADDRCONF(NETDEV_UP): p5p2: link is not ready

[ 1381.820095] i40e 0000:81:00.1: PCI-Express: Speed 8.0GT/s Width x8

[ 1381.826141] i40e 0000:81:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF DCB VxLAN Geneve NVGRE PTP VEPA

[ 1647.123056] EXT4-fs (sdk): recovery complete

[ 1647.123414] EXT4-fs (sdk): mounted filesystem with ordered data mode. Opts: (null)

[ 1668.179234] NOHZ: local_softirq_pending 08

[ 1673.994586] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1676.871805] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1692.833097] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1735.179086] NOHZ: local_softirq_pending 08

[ 1767.357902] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

[ 1803.828762] i40e 0000:81:00.0: RX driver issue detected, PF reset issued

 

After several failures, the card loaded in PCI-Express 2.0 mode. It became stable then:

 

Jan  1 18:44:35  systemd[1]: Started ifup for p5p1.

Jan  1 18:44:35  systemd[1]: Found device Ethernet Controller XL710 for 40GbE QSFP+ (Ethernet Converged Network Adapter XL710-Q2).

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5028] devices added (path: /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/p5p1, iface: p5p1)

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5029] locking wired connection setting

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5029] get unmanaged devices count: 3

Jan  1 18:44:35  avahi-daemon[1741]: Joining mDNS multicast group on interface p5p1.IPv4 with address xx.xx.xx.xx.

Jan  1 18:44:35  avahi-daemon[1741]: New relevant interface p5p1.IPv4 for mDNS.

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.5577] device (p5p1): link connected

Jan  1 18:44:35  avahi-daemon[1741]: Registering new address record for xx.xx.xx.xx on p5p1.IPv4.

Jan  1 18:44:35  kernel: [11572.541797] i40e 0000:81:00.0 p5p1: NIC Link is Up 40 Gbps Full Duplex, Flow Control: None

Jan  1 18:44:35  kernel: [11572.579303] i40e 0000:81:00.0: PCI-Express: Speed 5.0GT/s Width x8

Jan  1 18:44:35  kernel: [11572.579309] i40e 0000:81:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.

Jan  1 18:44:35  kernel: [11572.579312] i40e 0000:81:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.

Jan  1 18:44:35  kernel: [11572.617328] i40e 0000:81:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 48 RX: 1BUF RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA

Jan  1 18:44:35  kernel: [11572.635294] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.04 0x80002537 0.0.0

Jan  1 18:44:35  kernel: [11572.917343] i40e 0000:81:00.1: MAC address: 3c:xx:xx:xx:xx:xx

Jan  1 18:44:35  systemd[1]: Reloading OpenBSD Secure Shell server.

Jan  1 18:44:35  systemd[1]: Reloaded OpenBSD Secure Shell server.

Jan  1 18:44:35  kernel: [11572.921344] i40e 0000:81:00.1: SAN MAC: 3c:xx:xx:xx:xx:xx

Jan  1 18:44:35  NetworkManager[1911]: <warn>  [1483289075.9656] device (eth0): failed to find device 14 'eth0' with udev

Jan  1 18:44:35  NetworkManager[1911]: <info>  [1483289075.9671] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/13)

Jan  1 18:44:35  kernel: [11572.976596] i40e 0000:81:00.1 p5p2: renamed from eth0

 

Kind regards,

 

jpe

What is VC_CRT_x64 version 1.02.0000?

What installs it, and is it relevant to Intel Network Connections 21.1.30.0?  We are very security conscious and want to remove this software if it has no purpose. I am wondering whether it was installed by an older version of Intel Network Connections and is no longer needed. Is this just a registry entry that can be removed?

X540-AT2 speed problem

Hi!

I have an Intel S2600WT motherboard with two X540-AT2 Ethernet adapters. The auto-negotiated link speed is only 100 Mbps; when I force 1000 Mbps on the switch or in the Ethernet adapter properties, the link is lost. My OS is Windows Server 2012 R2 and my switch is a Cisco 6509 running IOS version 15.1(2)SY9. I also tried a Cisco 2960G with IOS version 15.0(2)SE8, with the same result.

Flow Director configuration not working

Hi:

I am using the i40e 4.4.0 driver for the XL710 network card. Currently, I am trying to loop back the connections.

 

For this purpose, I had to set the two ports in promiscuous mode. Then, using my application, I created custom UDP packets.
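The poster's application is not shown, but building a custom UDP packet like the ones being filtered here is straightforward; a minimal sketch (ports and payload are made up, and the checksum is left at 0, which is legal for UDP over IPv4):

```python
# Sketch of building a minimal custom UDP datagram, similar in spirit to what
# a test application might generate for Flow Director matching on dst-port 319.
import struct

def udp_packet(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build a UDP header + payload (checksum 0, which IPv4 permits)."""
    length = 8 + len(payload)  # UDP header is 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

pkt = udp_packet(12345, 319, b"ptp-event")
dst = struct.unpack("!H", pkt[2:4])[0]  # dst-port field, bytes 2..3
print(dst)  # -> 319
```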

 

For the Rx Queue setting, I have set the flow director as:

 

ethtool -N ens1f0 flow-type udp4 dst-port 319 action 3 loc

ethtool -N ens1f1 flow-type udp4 dst-port 319 action 3 loc

 

Essentially, I want all the packets with this dst-port to be forwarded to Queue 3. I can also see the rule has been inserted.

 

But, as seen in the attached picture, the flow director is not able to match the incoming packet. Thus, it does not forward the incoming packet to my desired queue.

 

proc-interrupts.png

 

Is this error due to the promiscuous mode that I set on the NIC ports?

 

I am not sure what is creating this issue. Also, I have verified that the incoming packet is destined for port 319.

 

I will be able to provide other details if needed!

 

I would appreciate any help.

 

Thanks!

IES api install problem & HNI Driver

Hi all,

 

I am setting up the test environment for the FM10000. I downloaded the IES (Intel Ethernet Switch Software) API and tried to install it on Ubuntu 16.04 LTS. I followed the guidelines in the documentation. Generally, what I do is cd to ies/src and type the command

"sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents".

When I go back and check my folder (/home/brayn/Documents), there is nothing installed in it, so I assume it failed? Below is the message shown on the terminal.

 

brayn@brayn-Ultra-27:~/Documents/ies/src$ sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents

make[1]: Entering directory '/home/brayn/Documents/ies/src'

/bin/mkdir -p '/usr/local/lib'

/bin/bash ../libtool   --mode=install /usr/bin/install -c   libFocalpointSDK.la libLTStdPlatform.la '/usr/local/lib'

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK-4.1.3_0378_00314560.so /usr/local/lib/libFocalpointSDK-4.1.3_0378_00314560.so

libtool: install: (cd /usr/local/lib && { ln -s -f libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so || { rm -f libFocalpointSDK.so && ln -s libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so; }; })

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.lai /usr/local/lib/libFocalpointSDK.la

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform-4.1.3_0378_00314560.so /usr/local/lib/libLTStdPlatform-4.1.3_0378_00314560.so

libtool: install: (cd /usr/local/lib && { ln -s -f libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so || { rm -f libLTStdPlatform.so && ln -s libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so; }; })

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.lai /usr/local/lib/libLTStdPlatform.la

libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.a /usr/local/lib/libFocalpointSDK.a

libtool: install: chmod 644 /usr/local/lib/libFocalpointSDK.a

libtool: install: ranlib /usr/local/lib/libFocalpointSDK.a

libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.a /usr/local/lib/libLTStdPlatform.a

libtool: install: chmod 644 /usr/local/lib/libLTStdPlatform.a

libtool: install: ranlib /usr/local/lib/libLTStdPlatform.a

libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/sbin" ldconfig -n /usr/local/lib

----------------------------------------------------------------------

Libraries have been installed in:

   /usr/local/lib

 

 

If you ever happen to want to link against installed libraries

in a given directory, LIBDIR, you must either use libtool, and

specify the full pathname of the library, or use the `-LLIBDIR'

flag during linking and do at least one of the following:

   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable

     during execution

   - add LIBDIR to the `LD_RUN_PATH' environment variable

     during linking

   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag

   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

 

 

See any operating system documentation about shared libraries for

more information, such as the ld(1) and ld.so(8) manual pages.

----------------------------------------------------------------------

make[1]: Nothing to be done for 'install-data-am'.

make[1]: Leaving directory '/home/brayn/Documents/ies/src'

 

Can anyone with experience help solve this problem?

 

Also, I can't find the HNI (Host Network Interface) driver mentioned in the guideline. Where should I download it? Thanks!

Issue with setting smp_affinity on ixgbe cards

Hi,

I am using a Dell PowerEdge R730 with dual Xeons (22 cores each) and 6 ixgbe-compatible cards, running Linux with ixgbe driver version 4.4.0-k on kernel versions 4.7.10 and 4.9.6.
I am loading the ixgbe modules at boot time, bringing up the interfaces, and setting smp_affinity on the cards using the set_irq_affinity script, so that all the RxTx IRQs are distributed across all the available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails, and I need to re-run the script manually one or more times for the desired settings to be applied. There were also several occasions when the settings were not applied at all, and it took several reboots for the script to start working again.
The problem appears not only at random times but also on random NIC controllers, so I am excluding failed hardware, especially since I also swapped NICs.

I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the error returned by the affinity-setting function irq_do_set_affinity is EBUSY, but sometimes it returns ENOSPC.

Further investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script. But if the error returned was ENOSPC, it takes several reboots for the problem to disappear.

To provide some more details on the system, I am attaching two text files with the output of modinfo for ixgbe and lspci on the machine.
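For readers unfamiliar with the mechanism being discussed: what set_irq_affinity ultimately does is write a hex CPU bitmask to /proc/irq/&lt;n&gt;/smp_affinity for each RxTx IRQ, one core per IRQ, round-robin. A minimal sketch of that mask arithmetic (the IRQ numbers and core count here are made up):

```python
# Sketch of set_irq_affinity-style mask derivation: each IRQ is pinned to one
# core, round-robin, and the value written to /proc/irq/<n>/smp_affinity is a
# hex bitmask with that core's bit set. Illustration only.

def affinity_mask(cpu: int) -> str:
    """Hex bitmask (as written to smp_affinity) selecting a single CPU."""
    return format(1 << cpu, "x")

def assign(irqs, num_cpus):
    """Distribute IRQs across CPUs round-robin, as the script does."""
    return {irq: affinity_mask(i % num_cpus) for i, irq in enumerate(irqs)}

print(assign([120, 121, 122, 123], num_cpus=4))
# e.g. {120: '1', 121: '2', 122: '4', 123: '8'}
```

An EBUSY from the kernel's irq_do_set_affinity then simply means that particular write was rejected at that moment, which is consistent with the poster's observation that re-running the same writes later succeeds.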

Difference in DPDK and Native IXGBE driver support for 82599 NIC

Hello All,

 

We have been trying to make unicast promiscuous mode work on RHEL 7.3 with the latest native ixgbe driver (ixgbe-5.1.3), but it seems that unicast promiscuous mode is not enabled for 82599-series NICs in the native driver.

I can see an explicit check in the ixgbe_sriov.c code where, before enabling promiscuous mode, it checks whether the NIC is an 82599EB or older and returns if so.

 

Adding snippet below:

        case IXGBEVF_XCAST_MODE_PROMISC:

                if (hw->mac.type <= ixgbe_mac_82599EB)

                        return -EOPNOTSUPP;

 

 

                fctrl = IXGBE_READ_REG(hw, IXGBE_FCTRL);

                if (!(fctrl & IXGBE_FCTRL_UPE)) {

                        /* VF promisc requires PF in promisc */

                        e_warn(drv,

                               "Enabling VF promisc requires PF in promisc\n");

                        return -EPERM;

                }

 

 

                disable = 0;

                enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |

                         IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;

                break;

 

But when I look at the corresponding code in DPDK 16.11, I can see that support has been added for the 82599 NIC family. The feature seems to have been implemented using the IXGBE_VMOLR_ROPE flag.

 

Relevant snippet from DPDK code:

uint32_t

ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)

{

        uint32_t new_val = orig_val;

 

        if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)

                new_val |= IXGBE_VMOLR_AUPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)

                new_val |= IXGBE_VMOLR_ROMPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)

                new_val |= IXGBE_VMOLR_ROPE;

        if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)

                new_val |= IXGBE_VMOLR_BAM;

        if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)

                new_val |= IXGBE_VMOLR_MPE;

 

        return new_val;

}
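To make the mapping in the snippet above concrete, here is a standalone mock of the same conversion logic. Note that the bit positions assigned below are illustrative placeholders, not the real ixgbe register definitions; only the rx-mask-to-VMOLR mapping mirrors the DPDK code:

```python
# Standalone sketch of ixgbe_convert_vm_rx_mask_to_val from the DPDK snippet.
# Flag bit positions are PLACEHOLDERS, not actual hardware register values.

ETH_VMDQ_ACCEPT_UNTAG     = 1 << 0
ETH_VMDQ_ACCEPT_HASH_MC   = 1 << 1
ETH_VMDQ_ACCEPT_HASH_UC   = 1 << 2
ETH_VMDQ_ACCEPT_BROADCAST = 1 << 3
ETH_VMDQ_ACCEPT_MULTICAST = 1 << 4

IXGBE_VMOLR_AUPE  = 1 << 24  # placeholder bit positions
IXGBE_VMOLR_ROMPE = 1 << 25
IXGBE_VMOLR_ROPE  = 1 << 26
IXGBE_VMOLR_BAM   = 1 << 27
IXGBE_VMOLR_MPE   = 1 << 28

def convert_vm_rx_mask_to_val(rx_mask: int, orig_val: int) -> int:
    """Mirror of the DPDK conversion: OR in a VMOLR flag per accept bit."""
    new_val = orig_val
    if rx_mask & ETH_VMDQ_ACCEPT_UNTAG:
        new_val |= IXGBE_VMOLR_AUPE
    if rx_mask & ETH_VMDQ_ACCEPT_HASH_MC:
        new_val |= IXGBE_VMOLR_ROMPE
    if rx_mask & ETH_VMDQ_ACCEPT_HASH_UC:
        new_val |= IXGBE_VMOLR_ROPE  # the unicast-hash flag the post refers to
    if rx_mask & ETH_VMDQ_ACCEPT_BROADCAST:
        new_val |= IXGBE_VMOLR_BAM
    if rx_mask & ETH_VMDQ_ACCEPT_MULTICAST:
        new_val |= IXGBE_VMOLR_MPE
    return new_val

val = convert_vm_rx_mask_to_val(ETH_VMDQ_ACCEPT_HASH_UC, 0)
print(bool(val & IXGBE_VMOLR_ROPE))  # -> True
```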

 

 

So can you please let us know why there is such a difference in supported NICs, and whether similar functionality can also be ported to the native ixgbe driver?

 

Other setup details

 

Kernel version

# uname -r

3.10.0-514.el7.x86_64

 

LSPCI output

# lspci -nn | grep Ether | grep 82599

81:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)

 

# ethtool -i eth2

driver: ixgbe

version: 5.1.3

firmware-version: 0x61bd0001

expansion-rom-version:

bus-info: 0000:81:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

 

Regards

Pratik


X710 Flow director issues on Linux

Hello all,

 

I am not able to set up Flow Director to filter flow-type ip4. There does not seem to be an issue when the flow type is specified as tcp.

It's on Linux (4.9.27), with a freshly downloaded driver. Below is the output of the driver version, firmware, and the ntuple filter I want to apply.

No error is shown anywhere.

 

Thank you!

 

ethtool -i i40e1

driver: i40e

version: 2.0.23

firmware-version: 5.05 0x80002927 1.1313.0

expansion-rom-version:

bus-info: 0000:05:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

ethtool -k i40e1

Features for i40e1:

rx-checksumming: off

tx-checksumming: off

        tx-checksum-ipv4: off

        tx-checksum-ip-generic: off [fixed]

        tx-checksum-ipv6: off

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: off

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: off

        tx-tcp-segmentation: off

        tx-tcp-ecn-segmentation: off

        tx-tcp-mangleid-segmentation: off

        tx-tcp6-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: off

generic-receive-offload: off

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-gre-csum-segmentation: off [fixed]

tx-ipxip4-segmentation: on

tx-ipxip6-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-udp_tnl-csum-segmentation: off [fixed]

tx-gso-partial: off [fixed]

tx-sctp-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

i40e version:        2.0.23

 

#ethtool -U i40e1 flow-type ip4 action -1 loc 1

Ethernet Controller X710 for 10GbE SFP+

Hello,
I have a problem connecting a DELL server with the INTEL X710 + SFP Intel FTLX8571D3BCVIT1 with OS Redhat 7.3, and a Junipers MX960, when we cut the port or disconnect the cable, our server says that the link Is always connected! Have you already had this type of problem because I do not understand, if any can help me stay available for more information? thank you

 

MAx

Does the I340-F4 support 100base-FX SFP modules

We are looking for a quad-port PCI Express card with 100BASE-FX interfaces, under Windows 7 32-bit. Does the I340-F4 support 100BASE-FX SFP modules?

I am not sure if SFP modules are plug-and-play for different speeds / wavelengths.

 

As background, we currently use I340-T4 cards with external media converters between copper and the external 100BASE-FX systems, and are looking for a replacement card with 100BASE-FX support so we can avoid the need for media converters. The required number of ports and the available motherboard slots limit us to quad-port PCI Express cards.

SR-IOV with IXGBE - Vlan packets getting spoofed

$
0
0

Hi All,

 

I am using RHEL 7.3 with Intel 82599ES NICs to launch VMs with SR-IOV-enabled NICs. I am configuring only one VF per PF, with a VLAN, trust mode on, and spoof checking disabled.

But when I send VLAN-tagged packets from the guest VM, I can see the "spoofed packet detected" message in dmesg for this PF.

We have also disabled the rx/tx vlan offload using ethtool command.

 

Here are setup details:

Kernel version

# uname -r

3.10.0-514.el7.x86_64

 

PF/VF configuration:

# ip link show eth2

4: eth2: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9192 qdisc mq state UP mode DEFAULT qlen 1000

    link/ether 90:e2:ba:a5:98:7c brd ff:ff:ff:ff:ff:ff

    vf 0 MAC fa:16:3e:73:12:6c, vlan 1500, spoof checking off, link-state auto, trust on

 

IXGBE version

# ethtool -i eth2

driver: ixgbe

version: 4.4.0-k-rh7.3

firmware-version: 0x61bd0001

expansion-rom-version:

bus-info: 0000:81:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

 

Messages from dmesg

[441100.018278] ixgbe 0000:81:00.0 eth2: 3 Spoofed packets detected

[441102.022383] ixgbe 0000:81:00.0 eth2: 2 Spoofed packets detected

[441104.026460] ixgbe 0000:81:00.0 eth2: 3 Spoofed packets detected

[441106.030516] ixgbe 0000:81:00.0 eth2: 2 Spoofed packets detected

 

 

LSPCI output

# lspci -nn | grep Ether | grep 82599

81:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)

81:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)

 

 

Ethtool -k output

# ethtool -k eth2 | grep vlan

rx-vlan-offload: off

tx-vlan-offload: off

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

 

Please let me know if you need any other information.

 

Regards

Pratik

Intel(R) Ethernet Connection (2) I219-V whats with the (2)?

I have an Asus Z170 Pro Gaming motherboard with the I219-V LAN port, but even on a fresh Windows 7 or Windows 10 installation, the port always shows up with the (2) prefix, even though it's the only LAN port available and it's the first time drivers have been installed for it (for that OS installation, after formatting the HDD). The driver version I'm currently using is 12.15.25.6 from earlier this year.

 

I'm pretty sure that at some stage I've seen it as simply Intel(R) Ethernet Connection I219-V, and I'd like to get back to that, but how? FWIW the hardware is working fine; I'm just fussy about my system configuration and would like this to be as originally intended.

I219-V.jpg


intel pro/1000 pt bricked?

I was trying to install an Intel PRO/1000 PT dual-port card into an Ubuntu Linux server and ran into the "NVM Checksum is Invalid" message.  I tried to run the BootUtil utility on the card, but now only ports 1 & 2 show and the MAC addresses are gone.  Is this card salvageable, or is it bricked?  I ordered another single-port card, but does anyone know the proper way to handle this issue?

 

Brian

XL710-QDA2 with Intel QSFP-40G-SR4 SFP

Hi All,

 

  We have "Intel Ethernet Converged Network Adapter XL710-QDA2".

 

  1) Configured in 2x40 mode

 

  2) The two ports on the board are connected back to back with the cable below:

       

        MACROREER

        40GBase-CU QSFP+ Cable 1m

       

      We are able to observe that the link status on both ports is UP. And, we are able to run tests between the ports.

     

  3) When we use the Intel QSFP+ modules below on both ports, connected with an MTP 24-fibre optical cable, the ports do not come up (link status shows down)

 

        E40GQSFPSR

        QSFP-40G-SR4  16-26

        Class 1  21CFR1040.10

        LN#50 6/2007

        FTL410QE2C-IT

 

     Please help us understand why the ports are not coming up with the above-mentioned Intel SFPs (point 3).

Is the optical cable we are using compatible with the SFPs, or do we need to use some other optical cable or SFP to bring the ports up?

Should we configure the board in any specific mode?

 

Looking forward to your response

Thanks in advance

 

-- Anand Prasad

How do I enable rx-fcs and rx-all for Intel X710 10-Gigabit SFI/SFP+ Network Card using i40evf v1.4.15?

I have tried "ethtool -K <devicename> rx-fcs on", but it does not work and shows this feature as "fixed."  Is there a way to enable this feature using a different method?
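For readers hitting the same wall: in `ethtool -k` output, a feature marked "[fixed]" is one the driver does not allow toggling at all, so `ethtool -K` cannot change it regardless of syntax. A quick way to list every such feature from captured output (the sample text below is fabricated for illustration):

```python
# Sketch: list which features are "[fixed]" (i.e. cannot be toggled with
# ethtool -K) from saved `ethtool -k <iface>` output. Sample is fabricated.

sample = """\
rx-fcs: off [fixed]
rx-all: off [fixed]
rx-checksumming: on
ntuple-filters: on
"""

def fixed_features(text: str) -> list:
    """Return the names of all features flagged [fixed] in ethtool -k output."""
    return [line.split(":")[0].strip()
            for line in text.splitlines()
            if "[fixed]" in line]

print(fixed_features(sample))  # -> ['rx-fcs', 'rx-all']
```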

 

Note: "Failed to get lock in i40evf_set_rx_mode" in dmesg output below:

 

dev@ccapdp:/vob_yukon/farid/share$ dmesg |egrep evf

[    1.569952] i40evf: module verification failed: signature and/or required key missing - tainting kernel

[    1.680148] i40evf: Intel(R) 40-10 Gigabit Virtual Function Network Driver - version 1.4.15

[    1.750595] i40evf 0000:00:08.0: Multiqueue Enabled: Queue pair count = 4

[    1.751048] i40evf 0000:00:08.0: MAC address: 52:54:00:1b:ad:0c

[    1.751049] i40evf 0000:00:08.0: GRO is enabled

[    1.758741] i40evf 0000:00:0a.0: Multiqueue Enabled: Queue pair count = 4

[    1.759209] i40evf 0000:00:0a.0: MAC address: 52:54:00:b2:ec:90

[    1.759211] i40evf 0000:00:0a.0: GRO is enabled

[    1.759655] i40evf 0000:00:0b.0: Multiqueue Enabled: Queue pair count = 4

[    1.760098] i40evf 0000:00:0b.0: MAC address: 52:54:00:a5:5a:9f

[    1.760099] i40evf 0000:00:0b.0: GRO is enabled

[    1.760501] i40evf 0000:00:09.0: Multiqueue Enabled: Queue pair count = 4

[    1.761010] i40evf 0000:00:09.0: MAC address: 52:54:00:11:b6:d4

[    1.761011] i40evf 0000:00:09.0: GRO is enabled

[    1.762715] i40evf 0000:00:0f.0: Multiqueue Enabled: Queue pair count = 4

[    1.763193] i40evf 0000:00:0f.0: MAC address: 52:54:00:37:64:16

[    1.763194] i40evf 0000:00:0f.0: GRO is enabled

[    3.890945] i40evf 0000:00:09.0: Failed to get lock in i40evf_set_rx_mode

[    3.891501] i40evf 0000:00:09.0: Failed to get lock in i40evf_set_rx_mode

[    3.891738] i40evf 0000:00:09.0: Failed to get lock in i40evf_set_rx_mode

[    3.924301] i40evf 0000:00:08.0: Failed to get lock in i40evf_set_rx_mode

[    3.925066] i40evf 0000:00:08.0: Failed to get lock in i40evf_set_rx_mode

[    3.925255] i40evf 0000:00:08.0: Failed to get lock in i40evf_set_rx_mode

[    3.930514] i40evf 0000:00:0a.0: Failed to get lock in i40evf_set_rx_mode

[    3.931277] i40evf 0000:00:0a.0: Failed to get lock in i40evf_set_rx_mode

[    3.931493] i40evf 0000:00:0a.0: Failed to get lock in i40evf_set_rx_mode

[    3.937261] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[    3.937947] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[    3.938125] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[    3.971121] i40evf 0000:00:0a.0: Failed to get lock in i40evf_set_rx_mode

[    3.972654] i40evf 0000:00:0b.0: Failed to get lock in i40evf_set_rx_mode

[    3.973811] i40evf 0000:00:0a.0: Failed to get lock in i40evf_set_rx_mode

[    3.974587] i40evf 0000:00:0b.0: Failed to get lock in i40evf_set_rx_mode

[    3.975528] i40evf 0000:00:08.0: Failed to get lock in i40evf_set_rx_mode

[    3.977633] i40evf 0000:00:08.0: Failed to get lock in i40evf_set_rx_mode

[    3.980099] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[    3.982872] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  587.017753] i40evf 0000:00:0f.0: Device is still in reset (-16), retrying

[  587.049125] i40evf 0000:00:0b.0: Device is still in reset (-16), retrying

[  587.081267] i40evf 0000:00:08.0: Device is still in reset (-16), retrying

[  587.113735] i40evf 0000:00:09.0: Device is still in reset (-16), retrying

[  588.079143] i40evf 0000:00:0f.0: Multiqueue Enabled: Queue pair count = 4

[  588.079779] i40evf 0000:00:0f.0: MAC address: 52:54:00:37:64:16

[  588.079783] i40evf 0000:00:0f.0: GRO is enabled

[  588.110108] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  588.111238] i40evf 0000:00:0b.0: Multiqueue Enabled: Queue pair count = 4

[  588.112924] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  588.114463] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  588.116152] i40evf 0000:00:0b.0: MAC address: 52:54:00:a5:5a:9f

[  588.116161] i40evf 0000:00:0b.0: GRO is enabled

[  588.147038] i40evf 0000:00:08.0: Multiqueue Enabled: Queue pair count = 4

[  588.147553] i40evf 0000:00:08.0: MAC address: 52:54:00:1b:ad:0c

[  588.147557] i40evf 0000:00:08.0: GRO is enabled

[  588.258219] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  588.261729] i40evf 0000:00:0f.0: Failed to get lock in i40evf_set_rx_mode

[  589.175108] i40evf 0000:00:09.0: Multiqueue Enabled: Queue pair count = 4

[  589.175644] i40evf 0000:00:09.0: MAC address: 52:54:00:11:b6:d4

[  589.175648] i40evf 0000:00:09.0: GRO is enabled

Intel PRO 1000 CT Desktop adapter - WOL in Windows 10 does not work

Dear all,

 

The Intel PRO/1000 CT Desktop Adapter (EXPI9301CT) supports WOL.

But it does not work when you try to use it in a Windows 10 machine.

I think the reason is the driver, because Intel does not provide a driver for Windows 10.

The inbox driver is used instead.

The Intel driver has its own tab for power management, where the WOL options can be set.

The Windows inbox driver only has the default Windows settings.

 

Does anybody have an idea how to use WOL with the PRO/1000 CT adapter?

 

Thanks.

Intel 82579V drops connection every hour on windows 10

Hi, I have an Acer Aspire X3960 small desktop with an Intel 82579V. Ever since upgrading to Windows 10 from Windows 7, I have had problems with dropped connections. I have tried various drivers from Intel and Microsoft, old and new, without any success (currently on driver version 12.15.22.6 from Microsoft from 2016, which seems to be the most stable). Still, the connection drops almost every hour. It usually recovers after a few minutes; otherwise I need to disable and re-enable the network adapter to get it going again.

Any suggestions?
