Channel: Intel Communities : Discussion List - Wired Ethernet

Network drivers for I218-V cause application fault in wmiprvse.exe


My Windows Event log shows the following error when the Intel(R) PROSet Monitoring Service is enabled to start automatically with Windows.

 

Source:        Application

Error Event ID:      1000

Task Category: (100)

Level:        Error

 

Faulting application name: wmiprvse.exe, version: 6.1.7601.17514, time stamp: 0x4ce79d42

Faulting module name: CoreAgnt.dll_unloaded, version: 0.0.0.0, time stamp: 0x59f38017

Exception code: 0xc0000005

Fault offset: 0x000007feda158098

Faulting process id: 0xdc0

Faulting application start time: 0x01d36651b58c7ea8

Faulting application path: C:\Windows\system32\wbem\wmiprvse.exe

Faulting module path: CoreAgnt.dll

Report Id: 7c0db145-d247-11e7-9f1a-10c37b4f479c

 

My adapter is "Intel(R) Ethernet Connection (2) I218-V" on an ASUS HM97-PLUS motherboard with Windows 7 SP1 64-bit. I am using the latest Intel(R) PROSet version 22.9.6.0. I have been getting this error with earlier versions as well, but I can no longer pin down the version at which the errors began.

 

Also, what exactly does the Intel® PROSet Monitoring Service do for me?

 

Thanks for your advice!


Connect directly X710 to XL710 with QSFP Breakout Cable


Hello!

 

I have a question about connecting NICs directly, without any switch.

I have a computer equipped with an Intel Ethernet CNA X710-DA4FH (4x 10 Gbps) and another computer with an Intel Ethernet CNA XL710-QDA1 (1x 40 Gbps). They are connected directly using an Intel X4DACBL3 QSFP+ 40G to 4x10G Ethernet breakout cable. Is this setup possible in theory? Unfortunately, I could not make it work.
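For completeness, this is roughly how I have been checking whether any of the four 10G links come up on the X710 side (a sketch assuming Linux with the i40e driver; the interface names are placeholders for my actual ports):

# Interface names are placeholders; substitute the real i40e ports.
for dev in ens1f0 ens1f1 ens1f2 ens1f3; do
    echo "== $dev =="
    sudo ethtool "$dev" | grep -E 'Speed|Link detected'
done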

 

Thanks in advance!

 

Best regards

ULP enable/disable utility. Where to get?


I have the Intel I219-V error (Code 10). Where can I get the ULP enable/disable utility?

 

Thanks

Steve

i210 blue screen on install and during operation


Hello,

 

I'm running Windows Server 2016 on an Asus P10 WS mainboard. This MB has two i210 network interfaces on-board. Currently I'm using only one of them; the second one is disabled in Windows Device Manager.

 

The installed driver has a date of 08.12.2016 and version 12.15.184.0 (as shown in Device Manager).

 

For the interface in use I have enabled teaming in Windows to use different VLANs. For a while now, there were two virtual adapters "Microsoft Network Adapter Multiplexor Driver #X" in my system, each configured with a different VLAN ID. This has been working quite well for some time now.

 

Recently, I tried to add a third VLAN. During configuration, the machine crashed with a blue screen. After reboot, the third virtual interface was there regardless. Now, I'm experiencing (seemingly) random crashes (blue screens) every once in a while. They seem to be more frequent when the new, third virtual adapter is being used a lot, but I can't really put my finger on it.

 

However, when I run the executable of the latest Intel driver package, PROWinx64.exe v22.7.1, there is a reproducible blue screen as soon as the new driver is being installed. There is an additional popup from the installer that says something like "installing driver for i210 Gigabit connection...". That's when the machine crashes, consistently.

 

The memdump analysis says the following (the first example is a "random" crash, the second one is the "driver install" crash):

 

***

Probably caused by : e1r65x64.sys ( e1r65x64+150ad )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000028, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff8099451abdf, address which referenced memory

***

 

***

Probably caused by : NdisImPlatform.sys ( NdisImPlatform!implatReturnNetBufferLists+1d6 )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000010, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff806566040b3, address which referenced memory

***

 

(Complete analysis logfile is available if anyone is interested.)

 

How can I fix this?

 

 

Regards

Re: Intel X710-DA4 / VMware ESXi 6.5u1 - Malicious Driver Detection Event using fw 1.3.1


Hi,

 

We have had similar issues; we are using the Intel X710. Every few weeks since upgrading to ESXi 6.5, the networking on one of our hosts has been locking up. We stumbled across the article below, which describes the issue well and also mentions disabling the i40en driver.

 

Install ESXi 6.5 on R730 with Intel X710 |VMware Communities
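For anyone else following along, the workaround described in that thread amounts to disabling the native i40en module so the host falls back to the legacy i40e driver. As I understand it (please verify against VMware's own documentation before trying it), the command is roughly:

# Run from the ESXi shell; a reboot is required afterwards.
esxcli system module set --enabled=false --module=i40en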

 

Note: we were on firmware version 1.3.1. Did this issue get resolved by the upgrade to the 1.4.3 NIC driver for Intel Ethernet Controllers X710?

 

Looking at some forums, some people ordered new NICs to get around the issue.

 

Craig

Intel(R) Ethernet Connection I217-LM: how many VLANs can be created?


"Intel® Network Adapter Driver for Windows* 8.1" is installed on the following personal computer.

 

PC:CELSIUS H730

OS:Windows8.1

NIC:Intel(R) Ethernet Connection I217-LM

 

How many VLANs can be created in this environment?

x1 vs x8 10gb performance...


I'm testing the throughput of a D-1500-based X552 and an X520, and according to lspci their PCIe lane configuration is as follows:

 

admin@capture:~$ sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" |grep -A 1 Ethernet

04:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

04:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

However, I'm getting exactly the same throughput numbers for each. It doesn't make sense that the X552 would only get one lane.

 

Is what lspci is telling me wrong?
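For reference, one thing I plan to check is whether the x1 width is a hardware limit or just how the link trained; comparing LnkCap (what the device advertises) against LnkSta (what was negotiated) shows that directly. A minimal sketch against the addresses from the output above (an x1 Gen1 link carries roughly 2 Gb/s of payload, so it could not sustain a genuine 10 Gb/s result):

# Compare advertised capability vs. negotiated state for the X552 port
sudo lspci -s 04:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
# And for the 82599ES (X520) port
sudo lspci -s 06:00.0 -vv | grep -E 'LnkCap:|LnkSta:'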

 

Thanks.

ULP Utility


So I am having the same issue many others are having with the Intel I218-V, and I have seen many people recommend the ULP utility to fix it. Where can I download this? Every post just says a message with the link was sent.


Intel I219-LM no connectivity


Hi all

 

We are currently experiencing an issue on several Dell Latitude 5480/5580 laptops that have the Intel I219-LM Ethernet adapter.

 

All laptops are running the latest version of Windows 10. When Ethernet is plugged in, these laptops receive the correct DHCP information: the correct DHCP and DNS servers are displayed in the Ethernet properties, and they receive the correct IP configuration. However, after receiving this information from our DHCP server, we are unable to ping the default gateway, DHCP server, Active Directory, DNS, or any other internal or external resource.

 

These are connected through Extreme Summit switches. Connecting a laptop to an external dock with Ethernet capability allows it to regain network connectivity and act normally. The drivers are fully up to date. Working on a campus with a number of buildings, separated by VLANs/subnets, we can try troubleshooting techniques and maybe eventually get it to work, whether by sheer tenacity or dumb luck: manually inserting it into our network access control, enabling/disabling ports, and so on. But there is no consistent way to fix the issue; it's hit or miss.

 

I'm wondering if there is anyone else that is experiencing this issue or has experienced this issue, or if I can get any feedback regarding a way to troubleshoot.

 

If any more information is necessary that I've left out please let me know and I'll be happy to supply that.

 

Thank you.

Audio stutter and system freezing with Intel I350-T4V2


Hello everyone,

I am having an issue with my music, videos, games and general system usage coming to a brief halt every so often, caused by high DPC latency reported for the driver tcpip.sys, which I believe is related to the Intel I350-T4V2 NIC that I have in my system. This is the report LatencyMon generated after running for nine minutes:

_________________________________________________________________________________________________________

CONCLUSION

_________________________________________________________________________________________________________

Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. At least one detected problem appears to be network related. In case you are using a WLAN adapter, try disabling it to get better results. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.

LatencyMon has been analyzing your system for  0:09:15  (h:mm:ss) on all processors.

 

 

 

 

_________________________________________________________________________________________________________

SYSTEM INFORMATION

_________________________________________________________________________________________________________

Computer name:                                        STEVEN-DT

OS version:                                           Windows 10 , 10.0, build: 15063 (x64)

Hardware:                                             ASRock, Z370 Gaming K6

CPU:                                                  GenuineIntel Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz

Logical processors:                                   12

Processor groups:                                     1

RAM:                                                  32701 MB total

 

 

 

 

_________________________________________________________________________________________________________

CPU SPEED

_________________________________________________________________________________________________________

Reported CPU speed:                                   3696 MHz

Measured CPU speed:                                   1 MHz (approx.)

 

 

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

 

 

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.

 

 

 

 

 

 

_________________________________________________________________________________________________________

MEASURED INTERRUPT TO USER PROCESS LATENCIES

_________________________________________________________________________________________________________

The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

 

 

Highest measured interrupt to process latency (µs):   64369.038961

Average measured interrupt to process latency (µs):   5.038917

 

 

Highest measured interrupt to DPC latency (µs):       64365.437229

Average measured interrupt to DPC latency (µs):       1.360576

 

 

 

 

_________________________________________________________________________________________________________

REPORTED ISRs

_________________________________________________________________________________________________________

Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

 

 

Highest ISR routine execution time (µs):              92.833333

Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Highest reported total ISR routine time (%):          0.100551

Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Total time spent in ISRs (%)                          0.122042

 

 

ISR count (execution time <250 µs):                   864900

ISR count (execution time 250-500 µs):                0

ISR count (execution time 500-999 µs):                0

ISR count (execution time 1000-1999 µs):              0

ISR count (execution time 2000-3999 µs):              0

ISR count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED DPCs

_________________________________________________________________________________________________________

DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

 

 

Highest DPC routine execution time (µs):              75280.324134

Driver with highest DPC routine execution time:       tcpip.sys - TCP/IP Driver, Microsoft Corporation

 

 

Highest reported total DPC routine time (%):          0.043887

Driver with highest DPC total execution time:         nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver, Version 388.31 , NVIDIA Corporation

 

 

Total time spent in DPCs (%)                          0.135294

 

 

DPC count (execution time <250 µs):                   3095769

DPC count (execution time 250-500 µs):                0

DPC count (execution time 500-999 µs):                1

DPC count (execution time 1000-1999 µs):              0

DPC count (execution time 2000-3999 µs):              0

DPC count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED HARD PAGEFAULTS

_________________________________________________________________________________________________________

Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

 

 

 

 

Process with highest pagefault count:                 none

 

 

Total number of hard pagefaults                       0

Hard pagefault count of hardest hit process:          0

Highest hard pagefault resolution time (µs):          0.0

Total time spent in hard pagefaults (%):              0.0

Number of processes hit:                              0

 

 

 

 

_________________________________________________________________________________________________________

PER CPU DATA

_________________________________________________________________________________________________________

CPU 0 Interrupt cycle time (s):                       21.807476

CPU 0 ISR highest execution time (µs):                92.833333

CPU 0 ISR total execution time (s):                   8.127851

CPU 0 ISR count:                                      864377

CPU 0 DPC highest execution time (µs):                418.928030

CPU 0 DPC total execution time (s):                   7.968359

CPU 0 DPC count:                                      2881669

_________________________________________________________________________________________________________

CPU 1 Interrupt cycle time (s):                       5.576597

CPU 1 ISR highest execution time (µs):                18.037338

CPU 1 ISR total execution time (s):                   0.000633

CPU 1 ISR count:                                      523

CPU 1 DPC highest execution time (µs):                156.160173

CPU 1 DPC total execution time (s):                   0.031753

CPU 1 DPC count:                                      2473

_________________________________________________________________________________________________________

CPU 2 Interrupt cycle time (s):                       3.872798

CPU 2 ISR highest execution time (µs):                0.0

CPU 2 ISR total execution time (s):                   0.0

CPU 2 ISR count:                                      0

CPU 2 DPC highest execution time (µs):                111.264610

CPU 2 DPC total execution time (s):                   0.109259

CPU 2 DPC count:                                      30866

_________________________________________________________________________________________________________

CPU 3 Interrupt cycle time (s):                       3.914723

CPU 3 ISR highest execution time (µs):                0.0

CPU 3 ISR total execution time (s):                   0.0

CPU 3 ISR count:                                      0

CPU 3 DPC highest execution time (µs):                75280.324134

CPU 3 DPC total execution time (s):                   0.213586

CPU 3 DPC count:                                      3120

_________________________________________________________________________________________________________

CPU 4 Interrupt cycle time (s):                       4.130378

CPU 4 ISR highest execution time (µs):                0.0

CPU 4 ISR total execution time (s):                   0.0

CPU 4 ISR count:                                      0

CPU 4 DPC highest execution time (µs):                127.520022

CPU 4 DPC total execution time (s):                   0.120647

CPU 4 DPC count:                                      40343

_________________________________________________________________________________________________________

CPU 5 Interrupt cycle time (s):                       3.761527

CPU 5 ISR highest execution time (µs):                0.0

CPU 5 ISR total execution time (s):                   0.0

CPU 5 ISR count:                                      0

CPU 5 DPC highest execution time (µs):                83.285173

CPU 5 DPC total execution time (s):                   0.004639

CPU 5 DPC count:                                      1086

_________________________________________________________________________________________________________

CPU 6 Interrupt cycle time (s):                       4.832866

CPU 6 ISR highest execution time (µs):                0.0

CPU 6 ISR total execution time (s):                   0.0

CPU 6 ISR count:                                      0

CPU 6 DPC highest execution time (µs):                101.324675

CPU 6 DPC total execution time (s):                   0.199118

CPU 6 DPC count:                                      46428

_________________________________________________________________________________________________________

CPU 7 Interrupt cycle time (s):                       3.556605

CPU 7 ISR highest execution time (µs):                0.0

CPU 7 ISR total execution time (s):                   0.0

CPU 7 ISR count:                                      0

CPU 7 DPC highest execution time (µs):                82.946970

CPU 7 DPC total execution time (s):                   0.003708

CPU 7 DPC count:                                      596

_________________________________________________________________________________________________________

CPU 8 Interrupt cycle time (s):                       3.937102

CPU 8 ISR highest execution time (µs):                0.0

CPU 8 ISR total execution time (s):                   0.0

CPU 8 ISR count:                                      0

CPU 8 DPC highest execution time (µs):                136.426407

CPU 8 DPC total execution time (s):                   0.144712

CPU 8 DPC count:                                      39441

_________________________________________________________________________________________________________

CPU 9 Interrupt cycle time (s):                       3.597930

CPU 9 ISR highest execution time (µs):                0.0

CPU 9 ISR total execution time (s):                   0.0

CPU 9 ISR count:                                      0

CPU 9 DPC highest execution time (µs):                82.416126

CPU 9 DPC total execution time (s):                   0.014064

CPU 9 DPC count:                                      6686

_________________________________________________________________________________________________________

CPU 10 Interrupt cycle time (s):                       4.082868

CPU 10 ISR highest execution time (µs):                0.0

CPU 10 ISR total execution time (s):                   0.0

CPU 10 ISR count:                                      0

CPU 10 DPC highest execution time (µs):                122.268939

CPU 10 DPC total execution time (s):                   0.162320

CPU 10 DPC count:                                      36323

_________________________________________________________________________________________________________

CPU 11 Interrupt cycle time (s):                       3.982992

CPU 11 ISR highest execution time (µs):                0.0

CPU 11 ISR total execution time (s):                   0.0

CPU 11 ISR count:                                      0

CPU 11 DPC highest execution time (µs):                81.980519

CPU 11 DPC total execution time (s):                   0.038943

CPU 11 DPC count:                                      6742

_________________________________________________________________________________________________________

I have tried updating my I350's driver to the latest version (12.15.184.0), but the problem persists. I have the latest UEFI update from ASRock (v1.30) and Windows is up to date with all the latest updates. I am at a loss as to what to do to solve this issue.

 

Thanks in advance.


I210 External Flash Programming with a Tegra X2


I'm making a carrier card for a Tegra X2 computer-on-module (running Linux 4.4.38-tegra, aarch64) and using the Intel I210 to get a second Ethernet port.  I've verified that the I210 works with the Tegra using an external Flash chip that was programmed before placing it on the board, with an image copied from an I210 adapter card's Flash.  To make it easier to manufacture, I'd like to program the Flash in-circuit.  I've seen BootUtil and EEUPDATE used to program the Flash and have downloaded both.  Is one of these better for an application like this?

 

I'm new to Linux and relying on Google to fill in the blanks, so I've used "sudo -i" to get into root where the EEUPDATE guide says to, and followed the rest of the "install for Linux" directions as best I could. It says it's successful but doesn't seem to be working.

 

root@lamont-tx2|~> cd Temporary/

root@lamont-tx2|Temporary> ls

install  iqvlinux.tar.gz

root@lamont-tx2|Temporary> chmod 777 install

root@lamont-tx2|Temporary> ls

install  iqvlinux.tar.gz

root@lamont-tx2|Temporary> ./install

Extracting archive..OK!

Compiling the driver...OK!

Removing existing iqvlinux.ko driver...OK

Copying iqvlinux.ko driver file to /lib/modules directory...OK!

Driver installation verification...Installed!

root@lamont-tx2|~> exit

logout

nvidia@lamont-tx2|/> EEUPDATE

-bash: EEUPDATE: command not found

 

From reading the guide, it sounds like EEUPDATE without options should give me a list of devices.  What am I not doing, or is this a larger problem?  The I210 shows up in lspci as an un-programmed device.

 

nvidia@lamont-tx2|~> lspci

00:01.0 PCI bridge: NVIDIA Corporation Device 10e5 (rev a1)

01:00.0 Ethernet controller: Intel Corporation Device 1531 (rev 03)
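For reference, I suspect the immediate "command not found" is simply a PATH issue, since the tool is not installed anywhere the shell searches. A minimal sketch of how I would expect to invoke it, assuming the Linux package ships a binary named eeupdate64e (which may well not exist for aarch64):

# Paths and the binary name below are assumptions based on the x64 package layout.
cd /path/to/eeupdate          # the directory the tool was extracted into
chmod +x eeupdate64e
sudo ./eeupdate64e            # a bare name is not searched in the current directory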

 

BootUtil looks like the better choice, but my inexperience with Linux is getting in the way.  I've stopped with BootUtil at step 2, "Compile the driver module", in the guide.  I've found the following directions for compiling a driver, but I'm not sure if that's what I need to do with it.

Compiling Drivers for Linux and Adding Them to your Linux Automation Image | Symantec Connect

 

To clarify what I'm after: can I program the Flash in-circuit with the Tegra X2? If so, is BootUtil or EEUPDATE the better choice? Am I on the right track with these tools, and is there any help out there to get me down the road quicker?

 

Any help would be very appreciated.

 

Thanks,

 

LaMont

Intel I210-T1 and disabling MCTP


So I noticed the I210-T1 supports MCTP, but I'm not seeing where I can disable this. I checked the firmware boot configuration guide and it was conspicuously blank: Upgrade, Enable, or Disable Flash with the Intel® Ethernet Flash...

 

Also, does anybody know whether the I210-T1 hooks into IME if the onboard NIC (which IME normally uses) is disabled? I get the feeling Intel is sneaky like that.

 


Reset adapter unexpectedly


I have a system I use as a gateway:

Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)

 

It is a component of a fit-PC Intense PC running Linux with the 4.9.67-1-lts #1 SMP kernel from Arch Linux.  This problem has been going on for several kernels now, over the last month.

 

Intense PC models specifications

 

It has been having trouble resetting and possibly checksumming.  I use an SSH proxy tunnel to panix.com which has always remained open and stable.  Now it dies:

 

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 3: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 44: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 45: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 46: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 47: new [direct-tcpip]

debug1: Connection to port 9000 forwarding to 127.0.0.1 port 8008 requested.

debug1: channel 48: new [direct-tcpip]

ssh_dispatch_run_fatal: Connection to 166.84.1.2 port 22: message authentication code incorrect

[ruben@flatbush ~]$

 

from dmesg:

[38972.211043] ffff88041f798da0: f4 6e af f6 83 02 1d 62 8c 12 0f 1a e9 99 89 cf  .n.....b........

[38972.211044] ffff88041f798db0: d3 b9 fc d0 d4 7a dc 96 d0 b0 f9 c3 bb fe c7 ea  .....z..........

[38972.211046] ffff88041f798dc0: 39 51 ea 2c 6d 80 64 43 59 da 15 91 3a 6c 40 05  9Q.,m.dCY...:l@.

[38972.211047] ffff88041f798dd0: a6 50 40 98 d0 bb cf f3 c7 1f e2 41 bd ec 19 05  .P@........A....

[38972.211049] ffff88041f798de0: ab 51 d0 9a 46 90 9b db 3b b2 fd c1 c9 c0 ef 7b  .Q..F...;......{

[38972.211051] ffff88041f798df0: 2a 45 2b f6 fb 42 96 56 af 45 89 23 99 51 4e ef  *E+..B.V.E.#.QN.

[38972.211053] ffff88041f798e00: fd 6c 9a df 51 66 db 3e ca d9 4c f2 ad cb 89 7a  .l..Qf.>..L....z

[38972.211054] ffff88041f798e10: c4 10 47 21 fc a5 4d f3 06 79 1f 8a ba d1 45 53  ..G!..M..y....ES

[38972.211056] ffff88041f798e20: bb 68 0b 8a ab 63 ae b2 1c 38 b0 35 6f 37 66 ec  .h...c...8.5o7f.

[38972.211057] ffff88041f798e30: 02 4a                                            .J

[38972.211062] e1000e 0000:00:19.0 eth1: Reset adapter unexpectedly

[38978.157727] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

[39014.162430] elogind[1369]: New session c4 of user ruben.

[39089.012361] lp: driver loaded but no devices found

[39089.040701] st: Version 20160209, fixed bufsize 32768, s/g segs 256

 

 

It has been suggested that I can monkey with the module settings, but I'm not certain what to try or how to proceed; guessing blindly is usually not useful. (A sketch of how such options would be set follows the modinfo output below.)

 

 

modinfo gives me this:

 

alias:      pci:v00008086d0000105Esv*sd*bc*sc*i*
depends:    ptp
intree:     Y
vermagic:   4.9.67-1-lts SMP mod_unload modversions
parm:       debug:Debug level (0=none,...,16=all) (int)
parm:       copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm:       TxIntDelay:Transmit Interrupt Delay (array of int)
parm:       TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm:       RxIntDelay:Receive Interrupt Delay (array of int)
parm:       RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm:       InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm:       IntMode:Interrupt Mode (array of int)
parm:       SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm:       KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm:       WriteProtectNVM:Write-protect NVM [WARNING: disabling this can lead to corrupted NVM] (array of int)
parm:       CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int)
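As a sketch of what "monkeying with the module settings" would look like (the parameter names are taken from the modinfo output above; the values are illustrations, not a recommendation):

# Persist module options; adjust the parameter list to whatever is actually being tested.
echo "options e1000e InterruptThrottleRate=0 SmartPowerDownEnable=0" | sudo tee /etc/modprobe.d/e1000e.conf
# Reload the module so the options take effect (this drops the link briefly), or simply reboot.
sudo modprobe -r e1000e && sudo modprobe e1000e
# Offload behaviour can also be toggled at run time with ethtool, e.g.:
sudo ethtool -K eth1 tso off gso off sg off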

Intel® PRO/1000 PT Server Adapter ESN code?


I need to know the ESN code for the PRO/1000 PT Server Adapter.


x520-sr2


I have a pair of Supermicro servers with Intel X520-SR2 adapters.  With NIC teaming I'm getting thousands of log events per second stating that packets were received on the wrong NIC.  With a team I can see this as a possibility, but does it need to flood me with messages about it?

 

No matching TeamNic found for packets received from member NDISIMPLATFORM\Parameters\Adapters\

Received LACPDU on Member {27EF013E-0DD4-497E-90A2-7E5AC30E6E84}. Buffer= 0x0180C2000002C4F57C565A6F880901010114008001E0520000021904008031043D0000000214000090E2BA92F1D80000000001001F0000000310050000000000000000000000000000000...

 

Event IDs: 25, 26, 27.

The drivers are the most current.

 

 

 

 

boatmn810

ixgbe driver support for X552 controller and external PHY


Hi

 

I am using the X552 controller (ADLINK com-ex7) with an external PHY (AQR107).

 

I just wanted to know whether this configuration is supported by the ixgbe driver. If not, is there a patch available to support this configuration?

 

Currently I am getting the following error:

 

[    0.748518] pci 0000:03:00.1: [8086:15ad] type 00 class 0x020000

[    0.748532] pci 0000:03:00.1: reg 0x10: [mem 0xfb200000-0xfb3fffff 64bit pref]

[    0.748553] pci 0000:03:00.1: reg 0x20: [mem 0xfb600000-0xfb603fff 64bit pref]

[    0.748560] pci 0000:03:00.1: reg 0x30: [mem 0xfb800000-0xfb87ffff pref]

[    0.748606] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold

[    0.748623] pci 0000:03:00.1: reg 0x184: [mem 0xfba00000-0xfba03fff 64bit]

[    0.748625] pci 0000:03:00.1: VF(n) BAR0 space: [mem 0xfba00000-0xfbafffff 64bit] (contains BAR0 for 64 VFs)

[    0.749006] pci 0000:03:00.1: reg 0x190: [mem 0xfb900000-0xfb903fff 64bit]

[    0.749008] pci 0000:03:00.1: VF(n) BAR3 space: [mem 0xfb900000-0xfb9fffff 64bit] (contains BAR3 for 64 VFs)

[   14.380732] ixgbe 0000:03:00.1: HW Init failed: -17

[   14.382261] ixgbe: probe of 0000:03:00.1 failed with error -17
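For reference, this is how I am checking which ixgbe build is actually loaded, in case a newer out-of-tree driver is needed for the external PHY (the interface name is a placeholder, and a port has to probe successfully for ethtool -i to report anything):

# Show where the ixgbe module was loaded from and its version, if reported
modinfo ixgbe | grep -E '^filename|^version'
# For a port that did probe, ethtool reports the bound driver and firmware versions
sudo ethtool -i eth0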

 

 

Thanks

IPsec offload


Hello

I'm looking for an Intel NIC that offloads IPsec to the NIC hardware.

I saw some datasheets, e.g. for the 540, that claim the HW supports IPsec offload, but there is no support in the Linux driver.
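As a side note, on recent kernels one way to check whether a given driver actually exposes IPsec (ESP) hardware offload is to look for the ESP feature flags with ethtool (the interface name is a placeholder):

# esp-hw-offload only appears on kernels/drivers with XFRM hardware offload support
sudo ethtool -k eth0 | grep -i esp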

Thanks Avi

MTBF for I350T4V2


Hello,

 

I am in need of the MTBF value for the Ethernet Server Adapter "I350T4V2" (Quad Port Copper).

Can someone please provide the value, or guide me to the specification where it is mentioned?

 

Thanks in advance.

