Channel: Intel Communities : Discussion List - Wired Ethernet

Can a PCIe x2-width link be supported by the X550-AT2?


Hi, all!

 

We're planning a product with a 10 GbE interface.

We want to use X550-AT2 as the controller.  

The CPU in the system is a Q-Seven module.

The link between the X550-AT2 and the CPU is PCIe Gen3.

In the X550 datasheet, the PCIe width is described as x1, x4, or x8. But in our system, we can only use a x2 width.

 

Can a x2 width be supported by the X550-AT2, or will it only be recognized as a x1 width?
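(For what it's worth, once a board is up, the width the link actually trained to can be read back with lspci on Linux; the 01:00.0 address below is a placeholder for wherever the X550 enumerates. LnkCap shows the widths the device advertises, LnkSta shows what was actually negotiated.)

lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'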

 

Thanks sincerely.


Intel ANS problem on Windows 10 1803


Is it possible that, after two weeks, the Intel ANS driver is still broken on Windows 10 1803? How long do I need to wait for this?

I can create a network LAG in LACP mode, but I can't enable the interface. Is there any solution?

Does Intel® Ethernet Converged Network Adapter X520-SR2 have CDR circuit?


I am checking whether my Intel adapter X520-SR2 has a CDR (clock and data recovery) circuit or not. I plugged in my SFP+ SR module and observed errors. I need to identify whether my X520-SR2 has a CDR circuit and find out the impact.

 

I could not find this in the product brief.

 

Intel® Ethernet Converged Network Adapter X520 Product Brief

 

Many thanks in advance to Intel support. Please help me out.

 


INTELLAN.mib trouble


I've installed the PRO2K3XP_32 driver on an HP ProLiant ML350 running Windows Server 2003, but the Intellan tree 1.3.6.1.4.1.343 is empty. How can I view the SNMP LAN status?
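To narrow down whether the agent is exporting anything under that subtree, a quick walk from another host may help; this assumes the net-snmp command-line tools and a read community of "public" (<server-ip> is a placeholder):

snmpwalk -v 2c -c public <server-ip> 1.3.6.1.4.1.343

If the Intel subagent registered correctly, this should return objects; an empty result points at the subagent rather than the network.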

xl pci-detach failure when using 82599 NIC


Hi,

I've come across a need to hotplug PCI devices between dom0 and domU using an SR-IOV NIC. But I'm experiencing problems when trying to detach VFs from more than one PV guest.

I can attach a VF to each DomU successfully, as follows:

# xl pci-assignable-list

0000:05:10.0

0000:05:10.2

# xl pci-attach 1 05:10.0

# xl pci-attach 2 05:10.2

But then I can't detach a VF, and it reports errors as follows:

# xl pci-detach 2 05:10.2

libxl: error: libxl_device.c:1269:libxl__wait_for_backend: Backend /local/domain/0/backend/pci/2/0 not ready

Only 05:10.0 can be detached successfully:

# xl pci-detach 1 05:10.0

# xl pci-assignable-list

0000:05:10.0
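In case it helps with debugging the "Backend ... not ready" error, the PCI backend state can be inspected directly in xenstore from dom0 (a minimal check, assuming the xenstore client tools are installed):

# xenstore-ls /local/domain/0/backend/pci/2

The state entry under the device node reflects the XenbusState the backend is stuck in (4 = Connected, 5 = Closing, 6 = Closed).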

Each guest config is the same, like this:

name = "ubuntu-pv-1"

bootloader = "pygrub"

memory = 256

vcpus = 1

vif = [ 'bridge=xenbr0' ]

disk = [ 'file:/home/ye/ubuntu-pv/ubuntu-pv-1/ubuntu-pv-1.img,xvda,rw' ]

pci_permissive = 1

The xl dmesg log follows:

# xl dmesg | grep -i vt-d | grep -i enable

(XEN) Intel VT-d Snoop Control enabled.

(XEN) Intel VT-d Dom0 DMA Passthrough enabled.

(XEN) Intel VT-d Queued Invalidation enabled.

(XEN) Intel VT-d Interrupt Remapping enabled.

(XEN) Intel VT-d Shared EPT tables enabled.

# xl dmesg | grep "I/O virt"

(XEN) I/O virtualisation enabled

And the libxl-driver log:

# cat  /var/log/libvirt/libxl/libxl-driver.log

xc: detail: sysctl operation failed -- need to rebuild the user-space tool set?

libxl: error: libxl.c:4364:libxl_get_physinfo: getting physinfo: Permission denied

xc: debug: hypercall buffer: total allocations:7 total releases:7

xc: debug: hypercall buffer: current allocations:0 maximum allocations:1

xc: debug: hypercall buffer: cache current size:1

xc: debug: hypercall buffer: cache hits:6 misses:1 toobig:0

I'm running Ubuntu 14 with Xen 4.6. I have tested on another machine with the same environment but encountered the same problem.

IGB/E1000: watchdog bite/crash during network performance measurement test


Hi,

I am using the network performance measurement tool nuttcp with IGB and E1000 cards on one of my development boards.

 

Issue: observed a crash/watchdog bite while testing with both IGB and E1000 cards.

Kernel: kernel_msm-3.18

 

Steps to reproduce:

 

1) On the device: ./nuttcp-8.1.4.arm -S

 

2) On the PC side, run the command below (xxx.xx.xxx.xxx is the device's IP address):

./nuttcp-8.1.4.x86 -w2m -u -R 160M -i 1 -T 1m xxx.xx.xxx.xxx

 

3) After a couple of iterations, we see the crash reported in the log.

 

IGB  Log

------------------

Parsing debug information for MSM_DUMP_DATA_CPU_CTX. Version: 20 Magic: 42445953 Source:

Parsing CPU1 context start 171c8a800 end 171c8b000

Core 1 PC: arch_counter_get_cntvct+1c <ffffffc000a25164>

Core 1 LR: arch_counter_get_cntvct+1c <ffffffc000a25164>

 

[<ffffffc000a25164>] arch_counter_get_cntvct+0x1c

[<ffffffc000352ca0>] __delay+0x24

[<ffffffc000352c74>] __const_udelay+0x24

[<ffffffc00049235c>] msm_trigger_wdog_bite+0xd0

[<ffffffc0000f1d0c>] spin_bug+0x94

[<ffffffc0000f1e7c>] do_raw_spin_lock+0x104

[<ffffffc000eb38a0>] _raw_spin_lock+0x28

[<ffffffc000710768>] igb_get_stats64+0x30

[<ffffffc000cb8244>] dev_get_stats+0x4c

[<ffffffc000d249f8>] iface_stat_fmt_proc_show+0x98

[<ffffffc0001e77e0>] seq_read+0x18c

[<ffffffc000222874>] proc_reg_read+0x8c

[<ffffffc0001c5fcc>] vfs_read+0xa0

[<ffffffc0001c6758>] SyS_read+0x58

[<ffffffc0000864b0>] el0_svc_naked+0x24

 

From the code, it looks like it could be stuck in igb_update_stats, so the unlock might not be happening in time.

 

Code file: kernel_msm-3.18/kernel/drivers/net/ethernet/intel/igb/igb_main.c (around line 5160):

 

static struct rtnl_link_stats64 *igb_get_stats64(struct net_device *netdev,
                                                 struct rtnl_link_stats64 *stats)
{
        struct igb_adapter *adapter = netdev_priv(netdev);

        /* stats64_lock is held across the full stats refresh; if
         * igb_update_stats() stalls, readers on other CPUs spin here */
        spin_lock(&adapter->stats64_lock);
        igb_update_stats(adapter, &adapter->stats64);
        memcpy(stats, &adapter->stats64, sizeof(*stats));
        spin_unlock(&adapter->stats64_lock);

        return stats;
}

 

 

E1000 Log

-------------

 

[  100.703446] init: Service 'atfwd' (pid 765) exited with status 255

[  100.708673] init: Service 'atfwd' (pid 765) killing any children in process group

[  104.343224] init: Untracked pid 2636 exited with status 0

[  133.202677] BUG: spinlock lockup suspected on CPU#0, kworker/0:3/986

[  133.208030]  lock: iface_stat_list_lock+0x0/0x18, .magic: dead4ead, .owner: NetworkStats/1294, .owner_cpu: 1

[  133.217936] Causing a watchdog bite!

[  133.345512] Backtrace for cpu 1 (current):

[  133.348758] CPU: 1 PID: 1294 Comm: NetworkStats Tainted: G        W      3.18.31-g12d3836-dirty #2

[  133.357697] Hardware name: Qualcomm Technologies, Inc. APQ8096v3 + PMI8994 DragonBoard (DT)

[  133.366028] Call trace:

[  133.368473] [<ffffffc000089d9c>] dump_backtrace+0x0/0x278

[  133.373843] [<ffffffc00008a034>] show_stack+0x20/0x28

[  133.378884] [<ffffffc000e70b08>] dump_stack+0x9c/0xd4

[  133.383914] [<ffffffc000093700>] arch_trigger_all_cpu_backtrace+0x6c/0xdc

[  133.390687] [<ffffffc0000f1e80>] do_raw_spin_lock+0x108/0x160

[  133.396415] [<ffffffc000e7f428>] _raw_spin_lock+0x28/0x34

[  133.401800] [<ffffffc0006d2de0>] e1000e_get_stats64+0x44/0x118

[  133.407614] [<ffffffc000c83140>] dev_get_stats+0x4c/0xac

[  133.412907] [<ffffffc000cef8f4>] iface_stat_fmt_proc_show+0x98/0x198

[  133.419243] [<ffffffc0001e77e0>] seq_read+0x18c/0x3b4

[  133.424277] [<ffffffc000222874>] proc_reg_read+0x8c/0xb4

[  133.429571] [<ffffffc0001c5fcc>] vfs_read+0xa0/0x14c

[  133.434520] [<ffffffc0001c6758>] SyS_read+0x58/0x94

 

 

cheers,

mohit

Intel X710 (R630) with i40en driver on VMware: problem


Hi there,

 

we are experiencing problems with the X710 series Intel NIC built into our Dell R630 server (running ESXi version 6.5.0, build number 7388607).

As for the i40en driver version:

 

   Driver Info:

         Bus Info: 0000:01:00:0

         Driver: i40en

         Firmware Version: 6.00 0x800034ef 18.3.6

         Version: 1.4.3

 

Just yesterday we had another outage with a lot of upset customer calls, so this issue is rather urgent. I have seen the blog entry about an i40en driver update for 6.7 installations, but that doesn't help our specific case.

Can you at least give us an ETA for the driver update for 6.5 installations? As mentioned above, it's pretty urgent, as we cannot have any more outages like yesterday's. Please advise.

 

Thank you in advance!

 

Best

Tommy

Installing driver 23.2 freezes Windows 10


When I try to install the newest NIC driver, 23.2, on my Windows 10 1803 x64 system, the system freezes just after the installation starts. What is going wrong?


I350-T2 + I219V teaming in Win 10 build 14393.2214


I also have a teaming problem with the I350-T2 + I219V.

I use Windows 10 64-bit, build 14393.2214 (released April 17), and the latest driver (Download Intel® Network Adapter Driver for Windows® 10).

I can successfully team these two adapters (using Intel ANS adaptive load balancing), and it works fine in normal situations.

BUT its fail-over function doesn't work at all.

Here is how I test the fail-over function:

I connect 3 client PCs and use iperf 2.0.10 to test the connection speed.

Then I intentionally disable one adapter. Theoretically, it should keep the connections to all 3 PCs at a lower speed (which is what I saw under Win 8), but what I actually see is connections aborting (sometimes one or two of them, sometimes all) with two kinds of error message: "write failed, connection reset by peer" or "software caused connection abort".

This problem didn't happen under Win 8, which I tested a couple of weeks ago.
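Roughly, the iperf 2 invocations look like this (addresses and duration are placeholders, not my exact values). On each client PC:

iperf -s

And on the teamed machine, one instance per client:

iperf -c <client-ip> -t 600 -i 1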

Does anyone have any idea?

Network driver I219-V


I am so disappointed that I left the stability of the Qualcomm network driver.

 

I have been running a KF2 (Killing Floor 2) server for years.

 

Just recently I retired the Gigabyte board and went with ASUS and the Intel network drivers I have heard so much about!

 

It's been 4 months of torture trying to sort out my high lag spike problem!!

 

Ping spikes into the thousands every 5th or so refresh of the server in the in-game browser!!!

 

Yet when I start the server on my old Gigabyte board the ping is steady!!

 

I want to break bones!!

 

How on earth do you release faulty drivers!!!

 

I finally thought I'd better at least try to ask and look for solutions in here; God knows I've looked everywhere!!!

 

NOT HAPPY.. intel LOLOL

 

 

I had better add: Z370 ASUS ROG Strix I Gaming crap, latest drivers (and all dated drivers), all BIOS now the latest. The attachment shows my 2 servers; ping spikes occur when running one server also.

Intel Pro 1000PT Quad Port Teaming


Hey!

 

So I bought two of these NICs and a Dell PowerConnect 2816 to connect two computers with up to a 4-gigabit connection when teamed. The issue is, I can't seem to get the drivers to install properly in Windows. ProSet seems to install for my motherboard's built-in NIC, which is also Intel Gigabit, but I don't want to team that; I just want a team of the 4 ports on the add-in card. According to every single thing I have seen on Google, there should be a Teaming tab when I click "Configure" on one of the Pro 1000PT connections, but it doesn't seem to exist anywhere, and Windows installs the drivers for the device automatically. I have tried installing the latest drivers, but again, it just shows the default Windows drivers installed. Help!

 

Thanks so much,

Rowan

Intel(R) Ethernet Connection I217-LM does not work at 1 Gbps


Hello,

 

I have an HP ZBook 15 laptop, where the "Intel(R) Ethernet Connection I217-LM" device is used for wired Ethernet. I also have two D-Link GO-SW-8G Gigabit switches (https://eu.dlink.com/se/sv/products/go-sw-8g-8-port-gigabit-dlinkgo-switch). Below, I will call these SW1 and SW2 for better understanding. Also, from the Internet service provider we have 1 Gigabit/s bandwidth (T-Home Magenta1 pack).

 

There are two scenarios that I set up, and in one of them the 1 Gbps speed did not work on the laptop.

 

Scenario 1, when the 1G speed was working:

The Internet modem is connected with a standard LAN cable to PORT-1 of SW1. The Ethernet interface in my laptop is connected to PORT-2 of SW1 with a standard LAN cable. In this case the Ethernet interface link speed was gigabit. This was visible both on the indicator LED of PORT-2 on SW1 (it was green) and in Windows when checking the link speed of the Ethernet interface (see the Google Photos link).

 

So, from the gigabit-speed point of view, there is no issue in this scenario.

 

Scenario 2, when the 1G speed was not working and the link speed dropped to 100 Mbps:

Similar to scenario 1, the Internet modem is connected with a standard LAN cable to PORT-1 of SW1. Now SW2 comes into the path: PORT-2 of SW1 is connected to PORT-1 of SW2 with a standard LAN cable. The Ethernet interface in my laptop is connected to PORT-2 of SW2 with a standard LAN cable. In this scenario, the indicator LEDs on the D-Link switches are green, meaning 1G speed on all used ports, except PORT-2 on SW2, which goes to the laptop. The indicator LED of this port was orange, and checking the link speed on the Windows side also shows 100 Mbps (see the Google Photos link).

 

I have tried manually setting the link speed for the Ethernet interface in Windows to 1G, but in that case the interface became "disconnected": the wired Ethernet disappeared from the list of connections in Windows, and the laptop started using my Wi-Fi network.

When I changed the interface link speed back to "Auto", it came back at 100 Mbps, and the Internet connection returned to the wired connection.
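A quick way to watch the negotiated speed from the Windows side is PowerShell (assuming a Windows version that has the NetAdapter module, i.e. Windows 8 or later):

Get-NetAdapter | Select-Object Name, LinkSpeed, Status

The LinkSpeed shown there should match what the switch LED indicates (green = 1 Gbps, orange = 100 Mbps).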

 

I did another test using a different gigabit device, a D-Link gigabit router: when I connected one of the router's LAN interfaces to PORT-2 (or 3, or so) of SW2, the indicator LED was green, meaning the speed was 1 Gbps.

 

Last week I opened a support ticket with D-Link, but in the end they did not see any issue with my setup from their point of view; they also told me that scenario 2 worked very well at 1G in their lab.

So now I think the problem is not with the D-Link switches but with Windows and/or the I217-LM device, and I would like to get some instructions, hints, etc. from you about doing some troubleshooting to identify the root of the issue I have.

 

This morning I updated the I217-LM driver to "Intel® Network Connections" version 22.9.6.0, but I got the same result: the expected 1 Gbps link speed is not seen when both D-Link switches are in the path.

For more detailed info about the drivers on my laptop, please refer to the attached text file.

 

Please try to help me.

 

Thanks for your help,

/Robi

Cannot PXE boot with Intel X710 NIC (PXE structure was not found in UNDI driver code segment)


Hello, I've been having trouble with this NIC not booting from PXE. I've already set the 10GbE interface as a boot interface in the BIOS. During PXE, I get the error message: "PXE structure was not found in the UNDI driver code segment." When it first didn't work, I used nvmupdate64e to update the NIC firmware. nvmupdate now tells me my firmware is up to date (version 6.01). However, I still get the same error message at PXE boot. I have also tried enabling the PXE flash firmware for the interfaces, but I still get the same error after all these driver installs and configuration changes.

 

lspci -vv | grep -i 'Intel Corporation Ethernet Controller X710'

81:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

81:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

 

./bootutil64e

Port     Network Address     Location     Series     WOL     Flash Firmware          Version

3          6805CA385DF0      129:00.0     40GbE    N/A       UEFI,PXE Enabled     1.0.66

4          6805CA385DF1      129:00.1     40GbE    N/A       UEFI,PXE Enabled     1.0.66

 

ethtool -i eth2

driver: i40e

version: 1.5.10-k

firmware-version: 6.01 0x80003483 1.1747.0

bus-info: 0000:81:00.0

 

./nvmupdate64e

Num      Description                                                       Ver.      DevId     S:B          Status

01          Intel (R) I350 Gigabit Network Connection      1.99      1521      00:0001   Update not available

02          Intel(R) Ethernet Converged Network             6.01      1572      00:129     Up to date

              Adapter X710-2

 

Any idea what I need to do to get this interface to PXE boot? eth2 is the interface that I am using.

Intel 82579LM, Windows 10 1709: VLANs problem


I have an Intel 82579LM network card in my Dell M6700. Windows version 1709, build 16299.309. Intel PROSet version: 23.1.100.0. Intel driver: 12.15.31.4. When I create a VLAN, this error occurs:

 

"One or more VLANs could not be created. Please check the adapter status and try again."

 

Can anybody help me? Thanks.

I210 Teaming after Windows 10 April 2018 Update


I had a pair of I210 NICs (a dual-NIC motherboard) teamed via static link aggregation, and all was working fine until today, when Windows 10 x64 decided to apply the April 2018 cumulative update. The page providing the respective drivers with ANS support (https://downloadcenter.intel.com/download/25016/Intel-Network-Adapter-Driver-for-Windows-10?product=64399) states that the latest version is 23.1 (dated 2018-02-21). Even with this latest driver, however, I can re-create the team but cannot enable it. The only way to keep things in a somewhat working state is to disable one of the NICs as well as the virtual team interface and keep only the other NIC active (i.e., not an ideal scenario).

 

Is there any chance that a new version of the driver will be available soon?

 

Thanks.

 

nvx


Intel I211 NIC + Windows 10 Pro 1803 = broken TCP checksum offload?


Hello everyone,

 

My desktop motherboard uses an Intel I211 network adapter, and on Windows 10 Pro 1803 (with the latest 23.2 driver, but also with 23.1) I've noticed that TCP checksum offload cannot be enabled at all via Device Manager -> Advanced Adapter Settings or Windows PowerShell.

 

In Advanced Adapter Settings, the options seem to be set to Disabled by default, but once I set them to something else and apply the settings, the window closes, the connection is lost for a few seconds, and when I open the settings page again the options are back to Disabled.

[screenshot: 7z7jxyv7rz011.png]

Then I tried using Enable-NetAdapterChecksumOffload in Windows PowerShell, but when I read the new values back with Get-NetAdapterChecksumOffload, I see everything set to Disabled.

[screenshot: fxniunu9rz011.png]
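For completeness, the PowerShell attempt was along these lines ("Ethernet" is a placeholder for the actual adapter name; the -TcpIPv4/-TcpIPv6 switches select the TCP checksum offload directions):

Enable-NetAdapterChecksumOffload -Name "Ethernet" -TcpIPv4 -TcpIPv6
Get-NetAdapterChecksumOffload -Name "Ethernet"

The Enable call resets the adapter, but with this driver the values read back as Disabled, as in the screenshots.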

 

Any ideas?

 

Thanks!

Questions about Intel 82599: Flow director + NUMA + performance


Hi there, how are you?

 

We're trying to get the maximum possible performance (throughput) out of our servers. Here is some context, followed by the main changes we made, and then a number of questions regarding the Intel 82599 and Linux parameters (like buffer/queue sizes, ring buffer, qdisc, or receive/send buffers). Please let us know if we missed any part, or ask if you need any further clarification.

 

Context:

  • Goal: to get the max throughput through packet locality (latency is not a concern under 0.5 s)
  • Load: mostly video (streaming) chunks, ranging from 170 KB to 2.3 MB
  • App (user land): nginx (multi-process, pinned one worker per core)
  • OS (kernel): RHEL 7.4 (3.10)
  • NIC (driver): Intel(R) 82599 10 Gigabit Dual Port Network Connection (rev 01) (ixgbe 5.3.7)
  • Bonding: IEEE 802.3ad dynamic link aggregation (a single card with two 10 Gbps ports, giving us 20 Gbps)
  • HW: CPU = Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz, Hyper-Threading = off, 2 CPU sockets, 12 CPU cores in total, 64 GB RAM
  • NUMA layout:

available: 2 nodes (0-1)

node 0 cpus: 0 1 2 3 4 5

node 0 size: 32605 MB

node 0 free: 30680 MB

node 1 cpus: 6 7 8 9 10 11

node 1 size: 32767 MB

node 1 free: 30952 MB

node distances:

node   0   1

  0:  10  20

  1:  20  10

What we did:

  • Installed the latest driver (ixgbe 5.3.7)
  • Ran set_irq_affinity -x local ethX with the `-x` option (enabling RSS and XPS) and for the local NUMA node
  • Enabled Flow Director: ntuple-filters on
  • Set affinity for our user-land application (nginx's worker_cpu_affinity auto)
  • XPS seems to be enabled (cat /sys/class/net/eth0/queues/tx-0/xps_cpus)
  • RSS seems to be working (cat /proc/interrupts | grep eth)
  • RFS seems to be disabled (cat /proc/sys/net/core/rps_sock_flow_entries shows 0)
  • RPS seems to be disabled (cat /sys/class/net/eth3/queues/rx-10/rps_cpus shows 00000,00000 for all queues)

Questions:

  1. Do we need to enable RPS for the (HW-accelerated) RSS to work? (when we check /sys/class/net/eth0/queues/rx-0/rps_cpus it shows 00000000,00000000 for all the queues)
  2. Do we need to enable RFS for Flow Director to work? (cat /proc/sys/net/core/rps_sock_flow_entries shows 0)
  3. Do we need to add any rule for Flow Director to work? (in the TCP4 case, since we can't see any explicit rule, we assume it uses the perfect hash (src and dst IP and port))
  4. How can we be sure that RSS and Flow Director are working properly? (see the sketch after this list)
  5. Why can't we use the most modern qdiscs with this multi-queue driver/NIC? (like fq or fq_codel, which we tried to set with sysctl net.core.default_qdisc; is it because of the multiple queues?)
  6. Does a single NIC connect directly to only a single NUMA node? (when we run set_irq_affinity -x local ethX it sets all the queues to the first NUMA node)
  7. If 6 is true, what's better for throughput: pinning the NIC to a single NUMA node, or spreading the multiple queues across all the nodes?
  8. Still, if 6 is true: if we buy a second NIC, can we connect it to the second NUMA node?
  9. We tried to set coalescing for the TX ring buffer (ethtool -C eth3 tx-usecs 84) and it just ignored our value; is it not possible to set coalescing for the TX ring buffer?
  10. Should we enable HT but use only as many queues as real CPUs/cores?
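Regarding question 4, here is roughly what we check today (eth3 as an example; the counter names come from ixgbe's ethtool statistics and may vary between driver versions):

ethtool -S eth3 | grep -E 'rx_queue_[0-9]+_packets'
ethtool -S eth3 | grep fdir

With RSS working, the per-queue packet counters should all grow under load; with Flow Director active, the fdir match/miss counters should move as flows are steered.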

 

If you read this far, thank you very much!

 


i40evf maximum RSS queues reasoning


I was looking through the i40evf source code, and it seems the maximum number of queues supported is 4. Is there a reason for this? Is there a way to use more, such as 16?

 

I've seen patches proposing the removal of a few variables (MAX_QUEUES, which sets the max to 16 but doesn't actually serve a purpose, since the RX queue limit is set to 4).

Server 2016 drivers for Intel Ethernet Connection I218-V (NIC)


I've searched but cannot find Windows Server 2016 drivers for this model of NIC. (Note: they do show drivers under Server 2016 for the I218-LM, but these don't work. I tried, and they were not recognized by the driver installation program.)

 

This NIC is in an Intel NUC model.

 

Does anyone know if they exist somewhere, or of any alternatives?

 

Geoff.

Intel Ethernet Connection X722 LOM, dcbx and ESXi 6.5


My Environment:

Server: Fujitsu PRIMERGY RX2530 M4

Ethernet Adapters: Intel Ethernet Connection X722 LOM (Two SFP+ ports)

Ethernet Adapters Firmware: 3.51

--

ESXi: 6.5 U2

i40en driver version: 1.5.8

--

Both Ethernet ports are connected to a Juniper EX4550 Virtual Chassis (Junos version 12.3R9.4).

Switch ports are in trunk mode with a native VLAN.

--

After server boot-up, ESXi shows only one X722 port; it is online and the port passes traffic. When I assign a VLAN tag to this interface (also changing IP addresses), the port is online but passes no traffic.

 

By default, the DCBX protocol is enabled on all switch ports:

set protocols dcbx interface all

 

I disabled the DCBX protocol on the ports where the ESXi server is connected:

set protocols dcbx interface xe-0/0/15.0 disable

set protocols dcbx interface xe-1/0/15.0 disable

 

 

After a server reboot, ESXi shows both X722 ports online, and VLAN tag assignment works.

 

Is this an i40en driver problem?
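One way to cross-check the DCB state from the ESXi side (vmnic4 is a placeholder for the X722 uplink; the dcb namespace should be available in ESXi 6.5):

esxcli network nic dcb status get -n vmnic4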
