Channel: Intel Communities : Discussion List - Wired Ethernet

How do I enable rx-fcs and rx-all for Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Card?


Currently, I have the following:

 

# ethtool -k p1p2

Features for p1p2:

rx-checksumming: on

tx-checksumming: on

    tx-checksum-ipv4: on

    tx-checksum-ip-generic: off [fixed]

    tx-checksum-ipv6: on

    tx-checksum-fcoe-crc: on [fixed]

    tx-checksum-sctp: on

scatter-gather: on

    tx-scatter-gather: on

    tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

    tx-tcp-segmentation: on

    tx-tcp-ecn-segmentation: off [fixed]

    tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: on

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: on [fixed]

tx-gre-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

fcoe-mtu: off [fixed]

tx-nocache-copy: on

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

 

I need rx-fcs and rx-all on. The only message I could find on the topic was from 2014, and it said that this might be supported in a future release.
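For what it's worth, the straightforward ethtool approach does not help here, since both features are reported as [fixed], meaning the driver does not expose them as switchable (commands shown only for reference, using the interface from the output above):

ethtool -K p1p2 rx-fcs on
ethtool -K p1p2 rx-all on
ethtool -k p1p2 | grep -E 'rx-fcs|rx-all'    # both still report "off [fixed]"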

 

We're using:

 

driver: ixgbe

version: 5.0.4

firmware-version: 0x18b30001

bus-info: 0000:04:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

Linux HOSTNAME 3.10.105-1.el6.elrepo.x86_64 #1 SMP Fri Feb 10 10:48:08 EST 2017 x86_64 x86_64 x86_64 GNU/Linux

 

LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch

Distributor ID:    CentOS

Description:    CentOS release 6.8 (Final)

Release:    6.8

Codename:    Final

 

Not sure what other data to provide.

 

If this card/chipset will not support this, what will?


I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), creating a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC pair fails.

 

Drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever I run the group creation wizard and select a group name (several tried), the adapters, and LACP, a Windows pop-up appears telling me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some kind of configuration does seem to take place.

Using Windows 7 SP1 x64, the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

ULP enable/disable utility. Where to get?


Same issue as seen elsewhere with the I218-V on a dozen Lenovo E550s. Rather than going to two remote locations and popping the CMOS on the lot, I'd like to give the ULP enable/disable utility a try.

I211 Gigabit Network adapter shows as Removable Device


I'm using a Gigabyte AX370A motherboard with integrated Intel GbE LAN, running Windows 10 64-bit.

 

The adapter works, but it appears in the system tray as a removable device. It is also listed under 'Unspecified' on the Devices and Printers page. I installed the drivers from Gigabyte's site first, then tried the latest release directly from Intel - no change.

Lanconf crossover MDI-X


Hi All,

 

I was trying to test 100BASE-T in MDI-X mode. However, LanConf doesn't support MDI-X directly; it looks like a PHY register has to be modified in order to get a signal in MDI-X mode.

 

Does anybody know how to make these changes?
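As a point of comparison only (not a LanConf answer): on Linux, drivers that expose MDI/MDI-X control let you force the mode through ethtool instead of raw PHY register writes. A minimal sketch, assuming an interface named eth0 and driver support for the mdix option:

ethtool eth0 | grep -i mdi      # current MDI-X status, if the driver reports it
ethtool -s eth0 mdix on         # force crossover (MDI-X) mode, where supported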

 

Regards

bala

x710 firmware update


I'm having trouble updating the firmware on an X710-DA4 card.

The card drops its connection at random with the Linux 4.9.9 driver, and the driver requests a firmware update. But I tried all three versions on the Intel website, and all of them said the update was not available.

Any help would be much appreciated.

 

 

ethtool -i ens4f1

driver: i40e

version: 2.0.19

firmware-version: 4.10 0x800011c5 0.0.0

expansion-rom-version:

bus-info: 0000:02:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

lspci does not show a serial number:

 

 

Capabilities: [e0] Vital Product Data

                Product Name: XL710 40GbE Controller

                Read-only fields:

                        [PN] Part number:

                        [EC] Engineering changes:

                        [FG] Unknown:

                        [LC] Unknown:

                        [MN] Manufacture ID:

                        [PG] Unknown:

                        [SN] Serial number:

                        [V0] Vendor specific:

                        [RV] Reserved: checksum good, 0 byte(s) reserved

                Read/write fields:

                        [V1] Vendor specific:

                End
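One guess on my side (an assumption, not a confirmed diagnosis): the NVM update tool matches adapters by PCI device and subsystem IDs, and OEM-branded X710 cards are often not covered by the generic Intel packages, which can show up as "update not available". The IDs can be checked with lspci:

lspci -nn -s 0000:02:00.1                      # [vendor:device] IDs for this port
lspci -vv -s 0000:02:00.1 | grep -i subsystem  # subsystem vendor/device (board identity)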

Intel NIC drivers 19.3: huge 6000+ DPC latency spike once every few secs


Hi, I would like to report that the newly released Intel NIC driver version 19.3 causes huge 6000+ DPC latency spikes once every few seconds.

 

My specs:

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version, 19.1, and the problem is gone.

Issue with setting smp_affinity on ixgbe cards


Hi,

I am using a Dell PowerEdge R730 with dual Xeons (22 cores each) and 6 ixgbe-compatible cards, running Linux with ixgbe driver version 4.4.0-k on both kernel 4.7.10 and 4.9.6.
I am loading the ixgbe modules at boot time, bringing up the interfaces, and setting smp_affinity for the cards using the set_irq_affinity script, so all the RxTx IRQs are distributed across the available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails, and I have to re-run the script manually one or more times for the desired settings to be applied. There were also several occasions when the settings were not applied at all, and it took several reboots before the script started working again.
The problem occurs not only at random times but also on random NIC controllers, so I am ruling out failed hardware, especially since I also swapped the NICs.

I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the affinity-setting function irq_do_set_affinity returns EBUSY, though sometimes it returns ENOSPC.

Further investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script. But if the error returned was ENOSPC, it took several reboots for the problem to disappear.

To provide more details on the system, I am attaching two text files with the output of modinfo for ixgbe and of lspci on this machine.
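For reference, the manual equivalent of what the script does boils down to this (a sketch; the interface name and IRQ number are placeholders), and reading the value back shows whether the kernel accepted it:

grep p1p1 /proc/interrupts                  # list the RxTx vectors for one port
echo 4 > /proc/irq/120/smp_affinity         # pin IRQ 120 to CPU 2 via a hex CPU mask...
echo 2 > /proc/irq/120/smp_affinity_list    # ...or by CPU number
cat /proc/irq/120/smp_affinity_list         # read back to confirm the setting took effect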


X710 MAC Filtering


Hi,

 

I have a system with an Intel X710 (supermicro onboard card).

 

There is a somewhat unusual configuration on this system, but it does work everywhere else I've seen (including other Intel cards on other systems):

 

I have an interface, with two vlans, eg:

eth0.10

eth0.11

 

If I configure eth0.10 with a specific MAC address, something at the driver/hardware level appears to prevent the network card from receiving packets with that source MAC address (tcpdump does not see them), even when they arrive on VLAN 11.
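For completeness, the configuration in question boils down to something like this (a sketch; the MAC address is only an illustrative value):

ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.11 type vlan id 11
ip link set dev eth0.10 address 02:11:22:33:44:55   # specific MAC on one VLAN interface
ip link set dev eth0.10 up
ip link set dev eth0.11 up
# After this, frames with source MAC 02:11:22:33:44:55 are no longer seen on eth0.11.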

 

Running the latest driver (not the OS-included one): 1.6.42

OS is Ubuntu Linux, 14.04.5 with 4.4 kernel    

 

 

Any idea how to disable this behavior?

 

Thanks,

-Adam

Issues with HPE 562SFP+ in Linux


Hi,
We are seeing some strange issues with this card.
We are running Debian 8.7.


I know that this is an HPE-branded card, but as far as I know it should use the i40e driver under Linux.
I compiled the latest driver, 2.0.19, found on SourceForge.

 

 

So, the issues we are seeing:
First, even when the driver loads correctly and the interfaces come up, we do not get any traffic between the Debian server and the switch.
We don't even see the MAC address on the switch at the remote end.

 

Secondly, sometimes when we boot the server the interfaces do not show up at all, and sometimes only one of them does. I have noticed this error in the kernel messages when this happens:

[    1.856937] i40e 0000:08:00.1: Initial pf_reset failed: -15

[    1.856939] i40e 0000:08:00.1: previous errors forcing module to load in debug mode

 

So what could be wrong? Are we using the wrong driver?
One of the problems here is that HPE does not provide any official drivers for Debian/Ubuntu.
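For anyone trying to help diagnose this, the basic checks would be something along these lines (a sketch; the PCI address comes from the error above, the interface name is a placeholder):

lspci -k -s 08:00.1        # which kernel driver is actually bound to the port
dmesg | grep -i i40e       # full i40e probe output, including the pf_reset error
ethtool -i eth4            # driver version and firmware/NVM version per interface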

 

 

Thanks

 

/Jo Christian

PCI passthrough and SR-IOV on the same host


Hi,

 

Can I use PCI passthrough and SR-IOV on the same host for different ports? For example, ens1f0-1 with SR-IOV and multiple VFs, and ens2f0-1 with PCI passthrough?

 

I have two apps on the same host; one supports SR-IOV and the other only supports PCI passthrough.
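To make the intent concrete, the idea on the hypervisor would be something like this (a sketch; the VF counts are arbitrary, and the passthrough ports would then be handed to the VM through the usual VFIO/libvirt hostdev mechanism):

echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs   # create VFs on the SR-IOV ports
echo 4 > /sys/class/net/ens1f1/device/sriov_numvfs
lspci -nn | grep -i ethernet                         # PCI addresses of ens2f0/ens2f1 for passthrough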

 

thanks,

Where to download NVM update 5.51 for XXV710 ?


The latest linked version is 5.05; I'm looking for 5.51, which is recommended for the XXV710.

Re: VLAN adapter not working after reboot - Win10, 22.0.1


Hi all,

 

Same problem here.

 

One tagged VLAN (6) and one untagged VLAN. After every reboot, the untagged VLAN doesn't work until I disable and re-enable the untagged VLAN virtual adapter. VLAN 6 doesn't have this problem and can ping other hosts in the same VLAN.

NIC is connected to a Cisco switch with the following config:

 

nwcore2#sh run | s 2/0/39

interface GigabitEthernet2/0/39

description AURELIO (0B39D)

switchport trunk allowed vlan 1,6

switchport mode trunk

spanning-tree portfast trunk

spanning-tree bpduguard enable

 

 

Let me know if you need any information to help track down the issue. Like ziesemer, I have no dump, as the system doesn't crash.

 

Best regards

Aurelio Llorente

x710 SR-IOV problems


Hi all,

 

I have following baseline:

 

Dell R630 (2x14 core Xeon, 128GB RAM, 800GB SSD)

x710 4-port NIC, in 10Gbit mode

SUSE12SP1

Latest NIC firmware but default PF/VF drivers (came with the OS, v1.3.4)

VF driver blacklisted on hypervisor

Setup according to official Intel and Suse documentation, KVM hypervisor

 

With the test setup (a single VM with a single VF and untagged traffic) I could achieve basically line-rate numbers: with MTU 1500, about 770 Kpps and 9.4 Gbps of bandwidth, for both UDP and TCP traffic, with no packet drops. There is plenty of processing power, the setup is nice and tidy, and everything works as it should.

 

The production setup is a bit different: the VM uses 3 VFs, one for each PF (the 4th PF is not used). All VFs except the first carry untagged traffic. The first VF passes two types of traffic: one untagged (VLAN 119) and one tagged (VLAN 1108). Tagging is done inside the VM. The setup worked fine for some time, confirming the test-setup numbers. However, after a while the following errors started to appear in the hypervisor logs:

 

Mar  11 14:32:52 test_machine1 kernel: [10423.889924] i40e 0000:01:00.1: TX driver issue detected on VF 0

Mar  11 14:32:52 test_machine1 kernel: [10423.889925] i40e 0000:01:00.1: Too many MDD events on VF 0, disabled

 

And the performance numbers became erratic: sometimes it worked perfectly, sometimes it did not. But most importantly, packet drops occurred.

 

So, I've reinstalled everything (hypervisor and VMs), configured exactly as before using automated tools, but upgraded the PF and VF drivers to the latest ones (v2.0.19/v2.0.16). The errors in the logs disappeared, but the issue persists. Now I see this in the logs:

 

2017-03-12T11:33:43.356014+01:00 test_machine1 kernel: [  420.439112] i40e 0000:01:00.1: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:43.376009+01:00 test_machine1 kernel: [  420.459168] i40e 0000:01:00.0: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:44.352009+01:00 test_machine1 kernel: [  421.435124] i40e 0000:01:00.2: Unable to add VLAN filter 0 for VF 0, error -22

 

I've increased the VM CPU count, the VF ring sizes, the VM's Linux software buffers, and the VM's net.core.netdev_budget kernel parameter (the amount of CPU time assigned to NIC processing), and turned off VF spoof checking in the hypervisor (see the sketch below), but the situation remains the same. Sometimes it works perfectly, other times it does not.
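For reference, the spoof-check change mentioned above, and the related trust flag, are set on the PF with ip link (a sketch; the PF name and VF index are placeholders):

ip link set ens1f0 vf 0 spoofchk off   # disable MAC/VLAN spoof checking for the VF
ip link set ens1f0 vf 0 trust on       # mark the VF as trusted (promiscuous/multi-MAC requests)
ip link show ens1f0                    # the per-VF lines show mac, vlan, spoofchk and trust state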

 

Can you please provide some insight? Since the rx_dropped counter is increasing in the VM, I suspect a driver/VF issue.

Is there a way to handle this problem without switching to untagged traffic?

 

 

 

Thank you in advance,

Ante

Intel processors and 40Gbps NICs


I am trying to achieve 40Gbps single-TCP-flow throughput out of a 40Gbps NIC. To remove any disk latency, I created an 80GB RAM disk on both the server and the client, then created a 40GB file in this RAM disk, which is also the root of my FTP server. When I download this 40GB file on the client, the speed never goes above about 12Gbps, while I would expect to saturate the NIC at 40Gbps. Doing some further analysis, I copied the file residing in the RAM disk to the same RAM disk, only changing the name. It takes almost the same time as sending the file over the 40Gbps NIC to the client, so my conclusion was that the memory subsystem cannot write at that speed. However, if I start a second download at the same time, pinned to a different CPU, the speed simply doubles. And if I start a third download, it triples.

 

Based on this observation, I believe the problem is that every core on a CPU gets a defined memory-access time slot, both to prevent starvation of memory access and to prevent a single core from "blocking" the entire system. However, this time slot is not long enough to produce the 40Gbps I would like to have on a single stream. To test my theory, I removed the second CPU (socket) and disabled all cores except two. I repeated the test and found that the in-memory file copy gained over 40% in speed.

 

All this said, I "think" that the best way to achieve very high throughput in this scenario is to disable as many cores as possible, so the remaining ones get a longer time slot to write to RAM.

 

My questions: Am I missing something, or is this just the way it is? I believe the time-slot control for memory-bus access is strictly governed by the Intel CPU. If it is not, can you give me some hints on how to tune it?
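In case it helps with reproducing this, a single-flow test can be pinned to a specific core and NUMA node like this (a sketch using iperf3 as a stand-in for the FTP transfer; the interface and host names are placeholders):

cat /sys/class/net/ens1/device/numa_node            # NUMA node the 40GbE NIC hangs off
numactl -N 0 -m 0 iperf3 -s                         # server side, bound to that node's CPUs and memory
numactl -N 0 -m 0 iperf3 -c server01 -P 1 -t 30     # client side, single TCP stream, same binding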

 

Thanks in advance for your feedback.

Moacir


What's the difference between using X710-DA4 or XL710-QDA1

  • We have been using the X710, and our designers have been investing time in this card and its driver.
  • If we wanted to look at the XL710, would the change be transparent (same driver and all), or not?

L2 VM using SRIOV with 82599ES


Hi,

 

I am trying to run a VM that won't have an IP address but will instead work in transparent mode (like a cable). I have a PF with 7 VFs on it; some of these VFs are part of VLAN x and some of VLAN y. Here are my issues:

 

1. I see that broadcast traffic is sent to all VFs regardless; can I change this so that only broadcasts for the related VLAN arrive on the VF?

2. As the traffic I generate has a different destination IP/MAC than my transparent VM, the VF needs to be in promiscuous mode. Is it enough to set "trust on" with ip link (see the sketch below)?

 

What I am trying to do is pass the traffic from VM1 to VM2 through the transparent VM:

VM1 --  TransparentVM -- VM2
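For question 2, the sketch I have in mind is the standard ip link per-VF configuration on the PF (placeholder names, VF indices and VLAN IDs; whether "trust on" alone gives the 82599 VF full promiscuity is exactly what I'm unsure about):

ip link set p1p1 vf 0 vlan 100        # VF 0 in VLAN x
ip link set p1p1 vf 1 vlan 200        # VF 1 in VLAN y
ip link set p1p1 vf 0 trust on        # let the VF request promiscuous/unicast modes
ip link set p1p1 vf 0 spoofchk off    # so forwarded frames with foreign source MACs are not dropped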

 

Thanks,

i354 LLDP packets not received


Hi,

 

I have Linux stations with Intel I354 Ethernet controllers. I'm trying to use LLDP for discovery between these stations, but it does not seem to work. If I use other interfaces, with different NICs, the LLDP utilities work fine. Could someone give me some hints on what might be wrong?
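A first check that narrows this down (a sketch; eth0 stands for the I354 port): see whether LLDP frames reach the host at all, since LLDP uses a fixed EtherType that tcpdump can filter on.

tcpdump -i eth0 -e -nn ether proto 0x88cc
# If nothing appears while a peer is known to be transmitting, the frames are being dropped
# below the OS; if frames do appear, the lldpd/lldpad configuration is the place to look.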

I210 Adapter restart triggers increased performance


I have a performance dilemma with the Windows 10 software/driver for an onboard I210 LAN chipset. I have two ASUS motherboards connected to a network: a Rampage IV (82579V) and a P10S WS (I210). The P10S WS is the server and the other is used as a desktop. With just these two machines connected, I get about 85-90 MB/s after a normal boot. But if I go into the P10S WS and change anything in the advanced configuration of the adapter, which makes the adapter restart, then my performance is 111-113 MB/s (which is really good). After booting the P10S WS, I have to change something (e.g. speed from Auto to 1 Gbps full duplex, even though it is running at 1 Gbps anyway) to make it restart and run at full speed. Any help would be appreciated. Running Intel's latest drivers.
