Channel: Intel Communities : Discussion List - Wired Ethernet

Intel Ethernet Connection I218-LM 1Gbps speed problem


Hi everyone,

 

A lot of similar reports can be found across forums and the internet regarding the Intel(R) Ethernet Connection I218-LM (and similar network cards) having problems connecting to a gigabit network: the maximum connection speed reached is 100 Mbps.

I will explain the problem in detail, and hopefully Intel personnel can help us.

 

System/network:

Lenovo ThinkPad T440s

Intel(R) Ethernet Connection I218-LM

Drivers: latest (tried Intel and Lenovo drivers)

Cabling: Cat5e (all cables)

Switch: TP-Link TL-SG1024

 

To eliminate all "external" factors up front: it is not due to bad cables, the switch, or any other variables.

Connecting another notebook (with a network card from a different vendor) to the same RJ45 outlet with the same cable gives 1 Gbps as expected. So we came to the conclusion that it has to be a problem with the I218-LM.

 

Behavior:

- When you connect the cable to the RJ45 port on the notebook, the LAN card apparently tries to negotiate 1 Gbps, which takes around 30-60 seconds, during which the LAN icon shows no connection. After that, a connection is established, but at 100 Mbps.

- Forcing the speed & duplex property from auto-negotiation to 1 Gbps leads to no connection at all after the cable is plugged into the notebook.

- Forcing the speed & duplex property from auto-negotiation to 100 Mbps leads to an instant connection after the cable is plugged into the notebook.
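
For anyone gathering data on this, a quick way to record exactly what was negotiated (a generic Windows check, not specific to the I218-LM) is PowerShell:

    # Show negotiated link speed and duplex for every adapter (Windows 8 and later)
    Get-NetAdapter | Format-List Name, InterfaceDescription, LinkSpeed, FullDuplex

On an affected machine this should show 100 Mbps for the I218-LM whenever auto-negotiation is left in effect.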

 

For some reason, 1 Gbps cannot be achieved, and I kindly ask Intel personnel to investigate the problem. The same issue has been reported many times with this LAN card and similar ones, so I believe many customers would benefit from a solution.

 

Thanks in advance.


Best driver for Intel gigabit 82579V and Windows 8.1 (low DPC count)


Hi guys,

 

I'm experiencing bad (distorted) sound (buffer underruns appearing as dropouts, clicks, or pops) when my network usage is average or high. I attached a LatencyMon report.

My motherboard is an ASUS P8P67 Pro and my network adapter is an Intel gigabit 82579V.

 

I have tried the most recent drivers and some random old Windows 7 drivers. The Windows 7 drivers seem to work better than the latest 20.7 PROWin64 drivers, but LatencyMon still shows a medium DPC count.

 

What driver should I use?

Should I uninstall windows 8.1 and go back to windows 7?

If I really want to keep windows 8.1, do I have to buy a new gigabit desktop Ethernet adapter?

 

It seems that I also have problems with USB drivers...

WinPE 5 x86 driver for HP 840 G3 (Intel I219-V) for LANDESK


Hello,

 

I need a WinPE 5 x86 driver for the HP EliteBook 840 G3.


I have tested with E1D6332.inf; it doesn't work.


Where can I find the right driver?

 

Thanks,

XL710 poll-mode fix in PF driver incomplete?


We have a polling XL710 VF driver and have found the appropriate poll-mode workaround in the DPDK. We are, however, not using the DPDK and are relying on the accompanying fix made to the latest Intel PF Linux drivers, e.g. version 1.3.49. However, this fix does not work, and we believe it is incomplete. The part we are referring to involves the DIS_AUTOMASK_N flag in the GLINT_CTL register. The code in the above release (and earlier ones) is (i40e_virtchnl_pf.c:344):

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&
        (vector_id == 0)) {
        reg = rd32(hw, I40E_GLINT_CTL);
        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {
            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;
            wr32(hw, I40E_GLINT_CTL, reg);
        }
    }

We believe this should say:

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&
        (vector_id == 1)) {
        reg = rd32(hw, I40E_GLINT_CTL);
        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {
            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK |
                   I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
            wr32(hw, I40E_GLINT_CTL, reg);
        }
    }

With the above changes the fix then works.

The addition of I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK is per section 8.3.3.1.4.2 of the datasheet.

The test for vector_id == 1 is there because the default MSI-X vector is 1. However, there is a good argument for removing this test altogether, since the vector involved depends on the VF implementation. Note that the fix in the DPDK eliminates this test.

 

We would appreciate it if you could verify the above and make changes to the released PF driver.

XL710 - BIOS/Initialization Error - The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.


Hi,

We have installed XL710 40GbE B1/rev 02 NICs in one of our servers and have upgraded the NIC drivers to the latest available i40e driver version, i.e. 1.4.25.

We are seeing a BIOS error like the following on every reboot of the machine:

 

  • The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.

 

But we are actually using the correct and latest driver version, 1.4.25. The error does not appear in the dmesg logs; it shows up only on the boot-time screens, as part of driver and firmware initialization.

Can someone help us resolve the issue? dmesg logs from the server are attached.

 

We are using CentOS 7.1, and it happens on both HP and Dell servers.

 

===Snippet from dmesg===========

[    3.431409] i40e 0000:03:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    3.841578] i40e 0000:03:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    7.258924] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    7.798504] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    8.205727] i40e 0000:82:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    8.614473] i40e 0000:82:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

 

03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

03:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

81:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

81:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

82:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 01)

82:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 01)
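
For reference, the driver and NVM versions actually loaded can also be cross-checked on the running system with plain ethtool (interface names are whatever the OS assigned):

    # Print the driver and NVM firmware versions for every i40e-bound interface
    for ifc in /sys/class/net/*; do
        ifc=$(basename "$ifc")
        ethtool -i "$ifc" 2>/dev/null | grep -q '^driver: i40e$' || continue
        echo "== $ifc"
        ethtool -i "$ifc" | grep -E '^(version|firmware-version):'
    done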

 

Thanks & Regards,

Suresh.

How to enable TX_TCLK on an INTEL I350-T2 PCIe Server Board?


We are doing our own embedded designs with Ethernet controllers. To verify our Keysight Ethernet conformance test setup, we are currently using an Intel I350-T2 PCIe server board as a reference board. For that, we need to enable the TX_TCLK pin output. Our Keysight contact told us there is a tool called EthAgent. Where can I find it? Is there any other way to do this, e.g. with the LanConf tool?

Intel Ethernet Connection I219-LM not working with Windows 10 deployment via WDS server


Hello, we have a Dell Latitude E7270 that has the Intel I219-LM Ethernet adapter. I am using a WDS server to deploy a Windows 10 image onto the computer, but it does not get an IP address. I can deploy Windows 7 just fine, but not Windows 10. I loaded Ethernet driver version 20.7.1 into the boot.wim, but when I boot over PXE to deploy the image, it still doesn't get an IP address. If you need clarification or more details, please let me know. Thanks.
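
P.S. For reference, this is roughly how I injected the driver into the boot image (via DISM; the paths and image index below are examples, not my exact layout):

    rem Mount the boot image, add the extracted Intel driver package, and commit the change
    dism /Mount-Wim /WimFile:D:\RemoteInstall\Boot\x64\Images\boot.wim /Index:1 /MountDir:C:\Mount
    dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers\I219-LM /Recurse
    dism /Unmount-Wim /MountDir:C:\Mount /Commit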

ixl problems on FreeBSD (XL710)


I'm having some strange issues with ixl(4) and an X710-DA4 card in a new-ish Intel-based server.  I'm pretty much replicating an existing setup from an older AMD machine that used 2 x X520-DA2 cards and ixgbe(4).  This is all on -CURRENT.

 

It's meant to be a bhyve server, so the 4x10GE ports are put into a LACP-based lagg(4), then vlan(4) interfaces are bound to the lagg, and then if_bridge(4) interfaces are created to bind the vlan and tap interfaces together.

 

The X710-DA4 is running the latest NVM from Intel (5.02):

dev.ixl.3.fw_version: nvm 5.02 etid 80002284 oem 0.0.0
dev.ixl.2.fw_version: nvm 5.02 etid 80002284 oem 0.0.0
dev.ixl.1.fw_version: nvm 5.02 etid 80002284 oem 0.0.0
dev.ixl.0.fw_version: nvm 5.02 etid 80002284 oem 0.0.0

 

I've tried both the ixl driver that comes with -CURRENT (1.4.3?) and the 1.4.27 driver from Intel, and am having the same problem with both.  The problem is exactly this (sorry it's taken me so long to get to it!):

 

Using just one interface, one interface + VLANs, the lagg without VLANs, etc., everything works perfectly fine.  As soon as I combine lagg + vlan + bridge, all hell breaks loose.  One machine can ping one alias on the server but not another, while other machines can.  The server itself can't ping the DNS server or the default route, but can ping things through the default route, etc.  The behavior is very unpredictable.  ssh can take a few tries to get in, and once in, "svn update" will work for a few seconds and then bomb out, etc.  This same config (except using a normal lagg instead of LACP) seems to work on ESXi, so it looks like a driver issue.

 

Here is the working config from the X520-DA2 system:

 

 

ifconfig_ix0="-lro -tso -txcsum up"
ifconfig_ix1="-lro -tso -txcsum up"
ifconfig_ix2="-lro -tso -txcsum up"
ifconfig_ix3="-lro -tso -txcsum up"
cloned_interfaces="lagg0 tap0 tap1 bridge0 bridge1 vlan1 vlan2"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 laggport ix2 laggport ix3"
ifconfig_vlan1="vlan 1 vlandev lagg0"
ifconfig_vlan2="vlan 2 vlandev lagg0"
ifconfig_bridge0="inet 192.168.1.100/24 addm vlan1 addm tap0"
ifconfig_bridge1="addm vlan2 addm tap1"
defaultrouter="192.168.1.1"

 

Here is the "broken" config from the X710-DA4 system:

 

 

ifconfig_ixl0="-rxcsum -txcsum -lro -tso -vlanmtu -vlanhwtag -vlanhwfilter -vlanhwtso -vlanhwcsum up"
ifconfig_ixl1="-rxcsum -txcsum -lro -tso -vlanmtu -vlanhwtag -vlanhwfilter -vlanhwtso -vlanhwcsum up"
ifconfig_ixl2="-rxcsum -txcsum -lro -tso -vlanmtu -vlanhwtag -vlanhwfilter -vlanhwtso -vlanhwcsum up"
ifconfig_ixl3="-rxcsum -txcsum -lro -tso -vlanmtu -vlanhwtag -vlanhwfilter -vlanhwtso -vlanhwcsum up"
cloned_interfaces="lagg0 tap0 tap1 bridge0 bridge1 vlan1 vlan2"
ifconfig_lagg0="laggproto lacp laggport ixl0 laggport ixl1 laggport ixl2 laggport ixl3"
ifconfig_vlan1="vlan 1 vlandev lagg0"
ifconfig_vlan2="vlan 2 vlandev lagg0"
ifconfig_bridge0="inet 192.168.1.101/24 addm vlan1 addm tap0"
ifconfig_bridge1="addm vlan2 addm tap1"
defaultrouter="192.168.1.1"

 

I've changed the various flags in the ifconfig_ixl# lines without any obvious differences.  Both machines are connected to the same HPE 5820X switch with the exact same config, so I don't believe it's a switch issue.
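
One thing worth checking here (a generic lagg(4) check, nothing ixl-specific) is whether all four ports actually reach the distributing state in the LACP negotiation, since a partially formed aggregate produces exactly this kind of per-host flakiness:

    # Each laggport line should show <ACTIVE,COLLECTING,DISTRIBUTING> once LACP has converged
    ifconfig lagg0 | grep laggport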

 

Any ideas? Has anybody seen something like this before?


X710 disconnections


On my Windows 2012 R2 servers, I have an X710 with random disconnections in two scenarios.

 

1- During Windows startup, I see many errors in the Event Viewer related to disconnections.

 

2- Opening the driver properties causes the NIC to disconnect.

 

 

I have all the latest drivers and firmware. On the same server we have a Mellanox ConnectX-4 without any of these issues.

X700 quad NIC - This device cannot start. (Code 10)


Hello,

 

I have a PowerEdge T630 that arrived just 2 weeks ago.  We put an X700 NIC in it.  Windows 2012 R2 (fresh install) didn't recognize the card, so today we downloaded the latest drivers from Intel (version 1.3.115.0, dated 3/22/2016).  The card is now recognized, and it shows 4 ports.

 

We tried plugging a 10G cable and a transceiver into this NIC.  In the network card properties, it went from 4 ports down to two.  So I checked Device Manager, and the 2 ports we plugged in show this message: "This device cannot start. (Code 10)".

 

I have tried turning off the server and turning it back on.

82599 hardware filter to accept only UDP4 traffic sent to port 53


Hello,

 

I tested hardware filters in my Ubuntu + 82599 development environment and everything seemed to work great. I have also read the Intel and ethtool documentation, but I've been unable to find a solution to my next question. I've got a DNS analysis tool, and I would like to accept only UDP packets sent to/from port 53 (DNS requests/responses) and drop everything else. In your opinion, is it by any means possible to implement a hardware filter like the one below (drop all packets that are not UDP or not sent to port 53)?

 

ethtool --config-ntuple eth4 flow-type !udp4 dst-port !53 action -1
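
For what it's worth, the stock ethtool ntuple grammar has no negation operator like the hypothetical command above; what it does accept are positive match rules, e.g. (queue number and rule locations here are arbitrary examples):

    ethtool -K eth4 ntuple on                                    # enable ntuple / Flow Director filtering
    ethtool -U eth4 flow-type udp4 dst-port 53 action 0 loc 1    # steer DNS requests to RX queue 0
    ethtool -U eth4 flow-type udp4 src-port 53 action 0 loc 2    # steer DNS responses to RX queue 0
    ethtool -u eth4                                              # list the installed rules

So the question is really whether the "drop everything else" part can be expressed in hardware at all.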

 

Thanks in advance and best regards,

Manuel Polonio

Question about teaming in Windows 10?


Hi people,

 

(Please be gentle, I am new and this is my first post, so sorry if I offend anyone with my silly question.)

 

I have an MSI Big Bang XPower II (X79) motherboard which has dual Intel Ethernet ports: it supports dual PCI Express LAN 10/100/1000 Ethernet via the Intel 82579V and 82574L.
In Windows 7, 8, and 8.1 I could always team these 2 ports; however, with Windows 10 (build 240) this seems to be impossible.

 

Is it true that this is impossible? I have tried almost everything already. I had to download the latest Intel drivers, because the default Windows 10 drivers don't allow teaming. But when I download and install them, it doesn't work: the teamed connection stays disabled. And when I check the team driver, it's really, really old...

 

Is anyone experiencing the same with Windows 10? Does anyone have an idea? Or should I just remain patient until Intel releases a working driver for teaming in Windows 10?

 

Thank you in advance.

 

Regards

Connecting an Intel Ethernet XL710-QDA2 to an Intel® Ethernet Converged Network Adapter X520-SR2


Hi

 

I need to connect my server, which has an XL710-QDA2 CNA, to another server with an X520-SR2 network adapter (E10G42BFSR). I was curious whether I can use an Intel Ethernet QSFP+ breakout cable (X4DACBL1, X4DACBL3, or X4DACBL5) for this purpose. If not, what other options do I have for going from 40GbE to 10GbE?

 

SFP+ Modules, SFP Modules, and Cables Compatible with Intel® Ethernet...

 

Thanks in advance. Please help me out.

 

Regards

 

Anzall

I217 support removed from the UEFI Ethernet Driver


I tried using the 20.7 UEFI Ethernet Driver E7006X3.EFI, but I see that I217 support has been removed.  Why? 

 

I need I217-LM support in this driver.

 

Going back through the releases, I see the last version that included it was 20.0, E6604X3.EFI.

 

Can I217 support please be added back into future builds?

 

Thank You

IBIS model for 82580DB


Hi

 

Has anyone done signal integrity simulations of the PCIe and SerDes interfaces of the 82580DB using HyperLynx?

Can anybody point me to a link where I can get the IBIS model for the 82580DB?


Slow UEFI boot with I350-T2


Hello,

 

I have had an I350-T2 in this PC for several months now without any trouble. Suddenly today, it started acting up. It now takes about 20 seconds longer to boot (before the POST screen even appears), and I have also noticed that the UEFI boot option list keeps growing. If I start with a list that has only the Windows 10 boot manager and the two onboard NICs in it, after the first boot there are two entries for the I350, after the next boot, three, and so on.

 

My POST code list says that the 20 seconds are spent in "PCI bus initialization"; if I remove the I350, that delay is completely absent.

 

I have tried to use bootutil to disable the boot ROM on the I350, but it had been disabled already and neither enabling it nor disabling it again had any effect (it keeps adding itself to the UEFI list, see above).
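
(For reference, this is roughly what I ran; the flag names are as I recall them from Intel's BootUtil documentation, and the NIC index is whatever BootUtil lists for the I350:)

    bootutil64e                          # enumerate adapters and their current flash/option-ROM state
    bootutil64e -NIC=1 -FLASHDISABLE     # disable the option ROM on the first I350 port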

 

It appears that if I disable the UEFI network support entirely, it works, but I need to boot from the network occasionally.

 

Any hints, short of a new card?

 

Thanks,

 

--

Christian

Need to downgrade "nvm" on Intel X710-DA2 10G Ethernet card


As part of a problem investigation, I updated the "nvm"/firmware on a couple of Intel X710-DA2 10G Ethernet cards using the NVM Update package (Download NVM Update Utility for Intel® Ethernet Converged Network Adapter XL710 & X710 Series). The new firmware did not solve the problem, and now I want to go back to the old firmware.

 

As part of the upgrade sequence, I had the software make a backup copy of the firmware. So I have a copy of the old firmware, but I see no way to actually put it back onto the board. The NvmUpdateTool does not provide a downgrade/restore-old-firmware option. The software made a directory with the old firmware (a 4 MB file labeled <macAddr>.bin, plus some other data labeled <macAddr>.flb).

 

Now what?

 

Thanks,

 

Rob Westfall

poor network performance when using X520-DA2


I've got a problem with a server connection using the following combination:

 

IBM System x3450 server, type 7379-K4G (PCIe x8 slot, 25 W), running Windows Server 2008 R2

Intel Network Adapter X520-DA2

Intel TwinAxial Cable 3m (XDACBL3M)

D-Link switch DGS-1510-52


Problem:

The 10 Gbit link is up, but the performance is more like 10 Mbit. A gigabit connection with the internal adapter is much faster!

Even opening the adapter's management GUI takes several seconds, and everything seems very slow.

As I don't use it, I tried disabling IPv6, but this didn't have any effect.
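
To put a number on "like 10 Mbit", a raw TCP throughput test between this server and another host separates the link itself from application behavior; iperf is a generic tool for this (the address below is only an example):

    :: on the remote host, start a listener:  iperf3 -s
    :: on this server, run a 30-second TCP throughput test across the 10G link
    iperf3 -c 192.168.1.50 -t 30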


Any ideas?


Thank you in advance! :-)

best regards

Stefan Rindler


Is there a test report of I210/I211 packet performance?


I want to compare the packet performance of the I210/I211, but I can't find a report showing this. Do we have a test report of I210/I211 packet performance?

SR-IOV failed on Intel Xeon D-1541's X552 10GbE NIC


Hi all,

 

I cannot figure out why I cannot enable SR-IOV on the Intel Xeon D-1541's X552 10 GbE NIC. It must be an issue with Intel's latest ixgbe driver, because on the same SoC board, SR-IOV can be enabled on the Intel I350 1 GbE NIC.

(Screenshot attached: sr-iov_failed.jpg)

Following is the PCI device info and also the corresponding ixgbe driver info:

root@pve1:/sys/bus/pci/devices/0000:03:00.1# lspci -vnnk -s  03:00.0

03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T [8086:15ad]

         Subsystem: Super Micro Computer Inc Device [15d9:15ad]

         Physical Slot: 0-1

         Flags: bus master, fast devsel, latency 0, IRQ 25

         Memory at fbc00000 (64-bit, prefetchable) [size=2M]

         Memory at fbe04000 (64-bit, prefetchable) [size=16K]

         Expansion ROM at 90100000 [disabled] [size=512K]

         Capabilities: [40] Power Management version 3

         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

         Capabilities: [70] MSI-X: Enable+ Count=64 Masked-

         Capabilities: [a0] Express Endpoint, MSI 00

         Capabilities: [100] Advanced Error Reporting

         Capabilities: [140] Device Serial Number 00-00-c9-ff-ff-00-00-00

         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)

         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

         Capabilities: [1b0] Access Control Services

         Capabilities: [1c0] Latency Tolerance Reporting

         Kernel driver in use: ixgbe

root@pve1:/sys/bus/pci/devices/0000:03:00.1# modinfo ixgbe

filename:       /lib/modules/4.2.8-1-pve/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko

version:        4.1.5

license:        GPL

description:    Intel(R) 10 Gigabit PCI Express Network Driver

author:         Intel Corporation, <linux.nics@intel.com>

srcversion:     9781CEF8A3110F93FF9DBA8

alias:          pci:v00008086d000015ADsv*sd*bc*sc*i*

alias:          pci:v00008086d00001560sv*sd*bc*sc*i*

alias:          pci:v00008086d00001558sv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*

alias:          pci:v00008086d00001557sv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Fsv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*

alias:          pci:v00008086d00001528sv*sd*bc*sc*i*

alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*

alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*

alias:          pci:v00008086d00001529sv*sd*bc*sc*i*

alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*

alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*

alias:          pci:v00008086d00001514sv*sd*bc*sc*i*

alias:          pci:v00008086d00001507sv*sd*bc*sc*i*

alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*

alias:          pci:v00008086d00001517sv*sd*bc*sc*i*

alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*

alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*

alias:          pci:v00008086d00001508sv*sd*bc*sc*i*

alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*

alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*

alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*

alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*

alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*

alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*

alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*

alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*

alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*

alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*

depends:        ptp,dca,vxlan

vermagic:       4.2.8-1-pve SMP mod_unload modversions

parm:           InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)

parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)

parm:           MQ:Disable or enable Multiple Queues, default 1 (array of int)

parm:           DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)

parm:           RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)

parm:           VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default=8) (array of int)

parm:           max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)

parm:           VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)

parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)

parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)

parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)

parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)

parm:           LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)

parm:           LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)

parm:           FdirPballoc:Flow Director packet buffer allocation level:

                         1 = 8k hash filters or 2k perfect filters

                         2 = 16k hash filters or 4k perfect filters

                         3 = 32k hash filters or 8k perfect filters (array of int)

parm:           AtrSampleRate:Software ATR Tx packet sample rate (array of int)

parm:           FCoE:Disable or enable FCoE Offload, default 1 (array of int)

parm:           LRO:Large Receive Offload (0,1), default 1 = on (array of int)

parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)

parm:           dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)

parm:           vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)
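
For comparison, these are the two standard ways VFs get requested with this driver; the interface name and VF counts below are examples (max_vfs is the module parameter listed in the modinfo output above):

    # Out-of-tree ixgbe: request 4 VFs on each of the two X552 ports via the module parameter
    modprobe -r ixgbe
    modprobe ixgbe max_vfs=4,4

    # Alternatively, the in-kernel sysfs interface, if this driver build supports it
    echo 4 > /sys/class/net/eth4/device/sriov_numvfs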
