Channel: Intel Communities : Discussion List - Wired Ethernet

Intel Network Adapter Driver for Windows 7 - Very Slow Response When Accessing Tabs in Windows Device Manager


I downloaded and installed the latest V22.10 of the above-mentioned driver (PROWinx64Legacy.exe) on a Beckhoff IPC running Windows 7 Professional x64 Edition with SP1, for the purpose of NIC teaming the installed Intel adapters. After the successful installation, I encountered the following:

 

(1) I found that it takes a long time (around 8 to 10 seconds) for the properties window of the various Intel adapters to show up once accessed. The added tabs (such as Teaming and Link Speed) behave the same way and take a long time (5 to 8 seconds) to activate and display their contents. This only affects the Intel network adapters. I installed the driver on two IPCs and observed the same behavior on both. I tried uninstalling and reinstalling the driver on both IPCs with no improvement.

 

Is this normal, or is this a bug in the latest driver? Has anyone encountered the same?

 

 

(2) I have the following four Intel adapters on the said IPC:

(a) 2x Intel 82574L Gigabit Network adapters (on an add-on PCIe card)

(b) 1x Intel I210 Gigabit Network adapter

(c) 1x Intel I219-LM Gigabit Network adapter

 

 

I am not able to team the two Intel 82574L adapters together; it gives an error: each team must include at least one Intel server device or Intel integrated connection that supports teaming. However, I am able to team one Intel 82574L adapter with either the I210 or the I219-LM adapter.

What is an Intel integrated connection? Are the Intel 82574L adapters not server adapters?

 


I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), the creation of a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC fails.

 

Drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever the group creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up appears telling me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some kind of configuration does seem to happen.

Under Windows 7 SP1 x64 the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

Intel EXPI9301CT: counterfeit or not


Hi!

 

I need to buy an Intel EXPI9301CT network card and want to know how to distinguish an original card from a counterfeit.

 

I know that the original card has the following:

1) RJ-45 with exposed bus pins and orange tape over it

2) 'Delta' transformer

3) YottaMark holographic mark (for cards produced since 2009)

 

BUT the cards I have seen on the market differ from the original:

1) RJ-45 pinout enclosed in a black box

2) 'Pulse' transformer

3) No YottaMarks at all

 

They look like the one on this eBay page: INTEL EXPI9301CT Gigabit CT Desktop PCI-e Network Adapter 82574L Chipset NIC | eBay

 

Please help me figure out how to identify an original Intel EXPI9301CT:

 

1) Does Intel use 'Pulse' transformers on its network cards? Does that mean the card is counterfeit?

2) Should a YottaMark be on ALL Intel cards?

 

Thank you!

SR-IOV invokes system reboot


Dears, I use SR-IOV VFs as Docker network devices via Kubernetes, but there is a fairly high probability of a system reboot.

syslog records nothing, and it only happens when VFs are released and allocated.

What should I do to solve this problem?

 

My environment:

  BIOS already has VT-d and SR-IOV Global Enable turned on

  OS is Ubuntu 16.04, kernel version 4.10; boot options are:

       BOOT_IMAGE=/vmlinuz-4.10.0-35-lowlatency root=UUID=xxxx ro cgroup_enable=memory swapaccount=1 intel_iommu=on iommu=pt

  set max_vfs to 63

       echo 63 >/sys/class/net/enp4s0f1/device/sriov_numvfs

 

  PCI device info

04:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

04:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

.....

 

   $ ethtool -i enp4s0f1

driver: ixgbe

version: 4.4.0-k

firmware-version: 0x800007f5

expansion-rom-version:

bus-info: 0000:04:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no
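
For reference, the sriov_numvfs interface requires the VF count to be dropped to zero before it can be changed; as far as I know, writing a new non-zero value while VFs already exist fails. A minimal sketch of a clean release/re-allocate cycle, using the enp4s0f1 device from above:

    # release all existing VFs first (changing the count directly fails)
    echo 0 > /sys/class/net/enp4s0f1/device/sriov_numvfs

    # re-create the desired number of VFs
    echo 63 > /sys/class/net/enp4s0f1/device/sriov_numvfs

    # check the maximum VF count the device actually supports
    cat /sys/class/net/enp4s0f1/device/sriov_totalvfs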

Intel Network Adapter Driver for Windows 7 installing incorrect version


I have downloaded and installed the latest Intel Network Adapter Driver for Windows 7 (64-bit), marked as version 22.10. However, what ends up getting installed is version 22.9.6, which was the previous release.

 

Download Intel® Network Adapter Driver for Windows 7*

 

Not a big deal, everything functions fine, but is anyone else noticing this?

X520-SR2


I have a pair of Supermicro servers with Intel X520-SR2 adapters. With NIC teaming I am getting thousands of log events per second stating that packets were received on the wrong NIC. Being a team, I can see this as a possibility, but does it need to flood me with messages about it?

 

No matching TeamNic found for packets received from member NDISIMPLATFORM\Parameters\Adapters\

Received LACPDU on Member {27EF013E-0DD4-497E-90A2-7E5AC30E6E84}. Buffer= 0x0180C2000002C4F57C565A6F880901010114008001E0520000021904008031043D0000000214000090E2BA92F1D80000000001001F0000000310050000000000000000000000000000000...

 

Event IDs 25, 26, 27.

Drivers are the most current.

 

 

 

 

boatmn810

I219-LM UDP Transport Issue


We have procured over a dozen NUC6i7KYK units (mfg 07/2017) to act as receivers of a moderate-rate (~320 Mbps) UDP data stream. We have found that the I219-LM adapter is dropping packets. We have been able to replicate the issue using the iperf tool. For the application (and for my troubleshooting tests) the NUC will be directly connected to the data source via a new, short (~1 m) patch cord.

 

We have the latest BIOS (KYSKLi70.86A.0050.2017.0831.1924) and we are using Ubuntu 16.04.3. We have also tried Ubuntu 16.04.2 and 17.10. We have tried the distro-provided e1000e 3.2.6-k driver as well as the latest e1000e 3.3.6 driver.

We have set net.core.rmem_max to 33554432 and net.core.wmem_max to 576000.

We have set coalescence to zero: ethtool -C eno1 rx-usecs 0.

We have maxed the receive ring buffer: ethtool -G eno1 rx 4096.

We have ensured max CPU frequency by setting the cpufreq governor to "performance".

 

Data loss is non-deterministic and sometimes happens immediately, sometimes after several minutes or several hours. Typically it is a single missing packet. We cannot afford any lost data.

 

I had a ZBOX-CI323NANO with openSUSE 42.3 on it. This low-performance device has Realtek RTL8111/8168/8411 PCIe GbE adapters. It received UDP data with zero losses. That's the sort of result I expect from this NUC.

It has net.core.rmem_max and net.core.wmem_max set to 16777216.


We have also used a Plugable USB 3.0 GbE adapter. It also runs with no lost data; however, it tends to "go away" after indeterminate periods of time. I'm not sure if that's a NUC, OS, or Plugable issue.

 

I saw this thread that looks similar to my problem: UDP packets frozen with at least I219-V, I219-LM and I217-LM

 

When I run Wireshark, I occasionally see the inter-packet gap reach ~10 ms instead of the typical handful of microseconds.
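
In case anyone wants to reproduce this, a quick way to narrow down where the packets are lost is to compare NIC-level and socket-level drop counters while the stream is running (interface name eno1 as above; the exact counter names vary between drivers):

    # NIC/driver-level counters; look for anything matching drop/miss/error
    ethtool -S eno1 | grep -iE 'drop|miss|err'

    # kernel UDP statistics; "receive buffer errors" means the socket buffer overflowed
    netstat -su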

ixgbe driver support for X552 controller and external PHY


Hi

 

I am using an X552 controller (ADLINK COM-EX7) with an external PHY (AQR107).

 

I just wanted to know whether this configuration is supported by the ixgbe driver or not. If not, is there any patch available to support it?

 

Currently I am getting the following error:

 

[    0.748518] pci 0000:03:00.1: [8086:15ad] type 00 class 0x020000

[    0.748532] pci 0000:03:00.1: reg 0x10: [mem 0xfb200000-0xfb3fffff 64bit pref]

[    0.748553] pci 0000:03:00.1: reg 0x20: [mem 0xfb600000-0xfb603fff 64bit pref]

[    0.748560] pci 0000:03:00.1: reg 0x30: [mem 0xfb800000-0xfb87ffff pref]

[    0.748606] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold

[    0.748623] pci 0000:03:00.1: reg 0x184: [mem 0xfba00000-0xfba03fff 64bit]

[    0.748625] pci 0000:03:00.1: VF(n) BAR0 space: [mem 0xfba00000-0xfbafffff 64bit] (contains BAR0 for 64 VFs)

[    0.749006] pci 0000:03:00.1: reg 0x190: [mem 0xfb900000-0xfb903fff 64bit]

[    0.749008] pci 0000:03:00.1: VF(n) BAR3 space: [mem 0xfb900000-0xfb9fffff 64bit] (contains BAR3 for 64 VFs)

[   14.380732] ixgbe 0000:03:00.1: HW Init failed: -17

[   14.382261] ixgbe: probe of 0000:03:00.1 failed with error -17
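
For reference, error code 17 is EEXIST. A quick sanity check is to confirm that the ixgbe module actually claims the X552 SFP+ device ID (8086:15ad, as seen in the dmesg output above) and which driver version is on disk:

    # does the installed ixgbe module claim PCI device 8086:15ad?
    modinfo ixgbe | grep -i 15ad

    # version of the ixgbe module
    modinfo -F version ixgbe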

 

 

Thanks


Audio stutter and system freezing with Intel I350-T4V2


Hello everyone,

I am having an issue with my music, videos, games, and general system usage coming to a brief halt every so often due to high DPC latency reported for the driver tcpip.sys, which I believe is related to the Intel I350-T4V2 NIC in my system. This is the report LatencyMon generated after nine minutes of running:

_________________________________________________________________________________________________________

CONCLUSION

_________________________________________________________________________________________________________

Your system appears to be having trouble handling real-time audio and other tasks. You are likely to experience buffer underruns appearing as drop outs, clicks or pops. One or more DPC routines that belong to a driver running in your system appear to be executing for too long. At least one detected problem appears to be network related. In case you are using a WLAN adapter, try disabling it to get better results. One problem may be related to power management, disable CPU throttling settings in Control Panel and BIOS setup. Check for BIOS updates.

LatencyMon has been analyzing your system for  0:09:15  (h:mm:ss) on all processors.

 

 

 

 

_________________________________________________________________________________________________________

SYSTEM INFORMATION

_________________________________________________________________________________________________________

Computer name:                                        STEVEN-DT

OS version:                                           Windows 10 , 10.0, build: 15063 (x64)

Hardware:                                             ASRock, Z370 Gaming K6

CPU:                                                  GenuineIntel Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz

Logical processors:                                   12

Processor groups:                                     1

RAM:                                                  32701 MB total

 

 

 

 

_________________________________________________________________________________________________________

CPU SPEED

_________________________________________________________________________________________________________

Reported CPU speed:                                   3696 MHz

Measured CPU speed:                                   1 MHz (approx.)

 

 

Note: reported execution times may be calculated based on a fixed reported CPU speed. Disable variable speed settings like Intel Speed Step and AMD Cool N Quiet in the BIOS setup for more accurate results.

 

 

WARNING: the CPU speed that was measured is only a fraction of the CPU speed reported. Your CPUs may be throttled back due to variable speed settings and thermal issues. It is suggested that you run a utility which reports your actual CPU frequency and temperature.

 

 

 

 

 

 

_________________________________________________________________________________________________________

MEASURED INTERRUPT TO USER PROCESS LATENCIES

_________________________________________________________________________________________________________

The interrupt to process latency reflects the measured interval that a usermode process needed to respond to a hardware request from the moment the interrupt service routine started execution. This includes the scheduling and execution of a DPC routine, the signaling of an event and the waking up of a usermode thread from an idle wait state in response to that event.

 

 

Highest measured interrupt to process latency (µs):   64369.038961

Average measured interrupt to process latency (µs):   5.038917

 

 

Highest measured interrupt to DPC latency (µs):       64365.437229

Average measured interrupt to DPC latency (µs):       1.360576

 

 

 

 

_________________________________________________________________________________________________________

REPORTED ISRs

_________________________________________________________________________________________________________

Interrupt service routines are routines installed by the OS and device drivers that execute in response to a hardware interrupt signal.

 

 

Highest ISR routine execution time (µs):              92.833333

Driver with highest ISR routine execution time:       dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Highest reported total ISR routine time (%):          0.100551

Driver with highest ISR total time:                   dxgkrnl.sys - DirectX Graphics Kernel, Microsoft Corporation

 

 

Total time spent in ISRs (%)                          0.122042

 

 

ISR count (execution time <250 µs):                   864900

ISR count (execution time 250-500 µs):                0

ISR count (execution time 500-999 µs):                0

ISR count (execution time 1000-1999 µs):              0

ISR count (execution time 2000-3999 µs):              0

ISR count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED DPCs

_________________________________________________________________________________________________________

DPC routines are part of the interrupt servicing dispatch mechanism and disable the possibility for a process to utilize the CPU while it is interrupted until the DPC has finished execution.

 

 

Highest DPC routine execution time (µs):              75280.324134

Driver with highest DPC routine execution time:       tcpip.sys - TCP/IP Driver, Microsoft Corporation

 

 

Highest reported total DPC routine time (%):          0.043887

Driver with highest DPC total execution time:         nvlddmkm.sys - NVIDIA Windows Kernel Mode Driver, Version 388.31 , NVIDIA Corporation

 

 

Total time spent in DPCs (%)                          0.135294

 

 

DPC count (execution time <250 µs):                   3095769

DPC count (execution time 250-500 µs):                0

DPC count (execution time 500-999 µs):                1

DPC count (execution time 1000-1999 µs):              0

DPC count (execution time 2000-3999 µs):              0

DPC count (execution time >=4000 µs):                 0

 

 

 

 

_________________________________________________________________________________________________________

REPORTED HARD PAGEFAULTS

_________________________________________________________________________________________________________

Hard pagefaults are events that get triggered by making use of virtual memory that is not resident in RAM but backed by a memory mapped file on disk. The process of resolving the hard pagefault requires reading in the memory from disk while the process is interrupted and blocked from execution.

 

 

 

 

Process with highest pagefault count:                 none

 

 

Total number of hard pagefaults                       0

Hard pagefault count of hardest hit process:          0

Highest hard pagefault resolution time (µs):          0.0

Total time spent in hard pagefaults (%):              0.0

Number of processes hit:                              0

 

 

 

 

_________________________________________________________________________________________________________

PER CPU DATA

_________________________________________________________________________________________________________

CPU 0 Interrupt cycle time (s):                       21.807476

CPU 0 ISR highest execution time (µs):                92.833333

CPU 0 ISR total execution time (s):                   8.127851

CPU 0 ISR count:                                      864377

CPU 0 DPC highest execution time (µs):                418.928030

CPU 0 DPC total execution time (s):                   7.968359

CPU 0 DPC count:                                      2881669

_________________________________________________________________________________________________________

CPU 1 Interrupt cycle time (s):                       5.576597

CPU 1 ISR highest execution time (µs):                18.037338

CPU 1 ISR total execution time (s):                   0.000633

CPU 1 ISR count:                                      523

CPU 1 DPC highest execution time (µs):                156.160173

CPU 1 DPC total execution time (s):                   0.031753

CPU 1 DPC count:                                      2473

_________________________________________________________________________________________________________

CPU 2 Interrupt cycle time (s):                       3.872798

CPU 2 ISR highest execution time (µs):                0.0

CPU 2 ISR total execution time (s):                   0.0

CPU 2 ISR count:                                      0

CPU 2 DPC highest execution time (µs):                111.264610

CPU 2 DPC total execution time (s):                   0.109259

CPU 2 DPC count:                                      30866

_________________________________________________________________________________________________________

CPU 3 Interrupt cycle time (s):                       3.914723

CPU 3 ISR highest execution time (µs):                0.0

CPU 3 ISR total execution time (s):                   0.0

CPU 3 ISR count:                                      0

CPU 3 DPC highest execution time (µs):                75280.324134

CPU 3 DPC total execution time (s):                   0.213586

CPU 3 DPC count:                                      3120

_________________________________________________________________________________________________________

CPU 4 Interrupt cycle time (s):                       4.130378

CPU 4 ISR highest execution time (µs):                0.0

CPU 4 ISR total execution time (s):                   0.0

CPU 4 ISR count:                                      0

CPU 4 DPC highest execution time (µs):                127.520022

CPU 4 DPC total execution time (s):                   0.120647

CPU 4 DPC count:                                      40343

_________________________________________________________________________________________________________

CPU 5 Interrupt cycle time (s):                       3.761527

CPU 5 ISR highest execution time (µs):                0.0

CPU 5 ISR total execution time (s):                   0.0

CPU 5 ISR count:                                      0

CPU 5 DPC highest execution time (µs):                83.285173

CPU 5 DPC total execution time (s):                   0.004639

CPU 5 DPC count:                                      1086

_________________________________________________________________________________________________________

CPU 6 Interrupt cycle time (s):                       4.832866

CPU 6 ISR highest execution time (µs):                0.0

CPU 6 ISR total execution time (s):                   0.0

CPU 6 ISR count:                                      0

CPU 6 DPC highest execution time (µs):                101.324675

CPU 6 DPC total execution time (s):                   0.199118

CPU 6 DPC count:                                      46428

_________________________________________________________________________________________________________

CPU 7 Interrupt cycle time (s):                       3.556605

CPU 7 ISR highest execution time (µs):                0.0

CPU 7 ISR total execution time (s):                   0.0

CPU 7 ISR count:                                      0

CPU 7 DPC highest execution time (µs):                82.946970

CPU 7 DPC total execution time (s):                   0.003708

CPU 7 DPC count:                                      596

_________________________________________________________________________________________________________

CPU 8 Interrupt cycle time (s):                       3.937102

CPU 8 ISR highest execution time (µs):                0.0

CPU 8 ISR total execution time (s):                   0.0

CPU 8 ISR count:                                      0

CPU 8 DPC highest execution time (µs):                136.426407

CPU 8 DPC total execution time (s):                   0.144712

CPU 8 DPC count:                                      39441

_________________________________________________________________________________________________________

CPU 9 Interrupt cycle time (s):                       3.597930

CPU 9 ISR highest execution time (µs):                0.0

CPU 9 ISR total execution time (s):                   0.0

CPU 9 ISR count:                                      0

CPU 9 DPC highest execution time (µs):                82.416126

CPU 9 DPC total execution time (s):                   0.014064

CPU 9 DPC count:                                      6686

_________________________________________________________________________________________________________

CPU 10 Interrupt cycle time (s):                       4.082868

CPU 10 ISR highest execution time (µs):                0.0

CPU 10 ISR total execution time (s):                   0.0

CPU 10 ISR count:                                      0

CPU 10 DPC highest execution time (µs):                122.268939

CPU 10 DPC total execution time (s):                   0.162320

CPU 10 DPC count:                                      36323

_________________________________________________________________________________________________________

CPU 11 Interrupt cycle time (s):                       3.982992

CPU 11 ISR highest execution time (µs):                0.0

CPU 11 ISR total execution time (s):                   0.0

CPU 11 ISR count:                                      0

CPU 11 DPC highest execution time (µs):                81.980519

CPU 11 DPC total execution time (s):                   0.038943

CPU 11 DPC count:                                      6742

_________________________________________________________________________________________________________

I have tried updating my I350's driver to the latest version (12.15.184.0), but the problem persists. I have the latest UEFI update from ASRock (v1.30), and Windows is up to date with all the latest updates. I am at a loss as to what to do to solve this issue.

 

Thanks in advance.

x1 vs x8 10Gb performance...


I'm testing throughput of a D-1500-based X552 and an X520, and according to lspci their PCIe lane configurations are as follows:

 

admin@capture:~$ sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:" |grep -A 1 Ethernet

04:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

04:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552 10 GbE SFP+

        LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

        LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

However, I'm getting the exact same numbers for each. It doesn't make sense that the X552 would only get one lane: a 2.5 GT/s x1 link moves only about 2 Gbps after 8b/10b encoding (less after protocol overhead), so it couldn't possibly match an x8 Gen2 card at 10GbE rates.

 

Is what lspci is telling me wrong?

 

Thanks.
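
One way to double-check is to compare what the device advertises against what the link actually trained to, using the 04:00.0 bus address from the output above:

    # LnkCap is the device's maximum; LnkSta is the currently trained link
    sudo lspci -s 04:00.0 -vv | grep -E 'LnkCap:|LnkSta:'

Worth noting that the X552 MAC is integrated into the Xeon D-1500 SoC, so the PCIe link parameters it reports may be placeholder values for an internal link rather than a real trained external link.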

i210 blue screen on install and during operation


Hello,

 

I'm running Windows Server 2016 on an Asus P10 WS mainboard. This MB has two I210 network interfaces on board. Currently I'm using only one of them; the second one is disabled in Windows Device Manager.

 

The installed driver has a date of 08.12.2016 and a version of 12.15.184.0 (as shown in Device Manager).

 

For the interface in use, I have enabled teaming in Windows to use different VLANs. For a while now, there have been two virtual adapters "Microsoft Network Adapter Multiplexor Driver #X" in my system, each configured with a different VLAN ID. This has been working quite well for some time.

 

Recently, I tried to add a third VLAN. During configuration, the machine crashed with a blue screen. After the reboot, the third virtual interface was there regardless. Now I'm experiencing (seemingly) random crashes (blue screens) every once in a while. They seem to be more frequent when the new, third virtual adapter is being used a lot, but I can't really put my finger on it.

 

However, when I run the executable of the latest Intel driver package, PROWinx64.exe v22.7.1, there is a reproducible blue screen as soon as the new driver is installed. There is an additional popup from the installer that says something like "installing driver for i210 Gigabit connection...". That's when the machine crashes consistently.

 

The memory dump analysis says the following (the first example is a "random" crash, the second one is the "driver install" crash):

 

***

Probably caused by : e1r65x64.sys ( e1r65x64+150ad )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000028, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff8099451abdf, address which referenced memory

***

 

***

Probably caused by : NdisImPlatform.sys ( NdisImPlatform!implatReturnNetBufferLists+1d6 )

[...]

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high.  This is usually

caused by drivers using improper addresses.

If kernel debugger is available get stack backtrace.

Arguments:

Arg1: 0000000000000010, memory referenced

Arg2: 0000000000000002, IRQL

Arg3: 0000000000000000, value 0 = read operation, 1 = write operation

Arg4: fffff806566040b3, address which referenced memory

***

 

(Complete analysis logfile is available if anyone is interested.)

 

How can I fix this?

 

 

Regards

IPsec offload


Hello

I'm looking for an Intel NIC that offloads IPsec to the NIC hardware.

I saw a datasheet, e.g. for the X540, claiming that the HW supports IPsec offload, but there is no support in the Linux driver.

Thanks, Avi


SFP+ Not Detecting In X710-DA4


Hi Team,

 

We have Intel X710-DA4 NICs installed in our newly deployed Lenovo ThinkServer SD350. The OS on the server is VMware ESXi 6.5.

 

We are using another vendor's SFP+ modules in the card, but the card is not supporting them. Link status is down and there is no LED.

But the same SFP+ works fine in an Intel X520 NIC in the same model of server. The X710 uses the i40e driver.

 

Please help us fix this issue.

 

Thanks!
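
In case it helps with debugging, ESXi can report what the i40e driver sees on the port; vmnic4 below is a placeholder for whichever uplink maps to the X710:

    # list all uplinks with their drivers and link state
    esxcli network nic list

    # driver, firmware, and link details for one uplink
    esxcli network nic get -n vmnic4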

 

X710 card: no working module


"You cannot use other brands of SFP+ (10 Gbps) modules with these adapters".

 

Dear Intel, you are losing a client. Today we had a problem in our network; we have X710 cards but no working modules. It is the last time we are buying them.


We need the JHL6540 chip PCB design document, please send it to me, thank you very much!


We need the JHL6540 chip PCB design document; please send it to me. Thank you very much!

FreeBSD/OPNsense igb issue with I211-AT embedded NICs, link gone sometimes, only ifconfig d/u fixes it


Hi all,

 

I'm doing intensive testing of OPNsense (a firewall solution based on FreeBSD 11.0/11.1) on a device from Jetway with 10 embedded I211-AT NICs: JBC390F541AA :: HBJC390F541AA19B :: Intel Celeron Bay Trail J1900 3.5" SBC Barebone :: JETWAY COMPUTER CORP.

 

From time to time, the link stops responding; e.g. a ping to a neighboring system no longer works. I do not get any hints in the log files.

Only doing an "ifconfig igbX down" followed by an "ifconfig igbX up" fixes the issue.

 

I did extensive testing, which I have documented here:

WAN link gone sometimes (igb driver, I211 nics), ifconfig d/u fixes it (forum.opnsense.org)

 

As I'm still seeing this issue from time to time, I'd like to ask the question here in case anybody else has seen it before.

 

Best regards,

Werner
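
Until the root cause is found, one crude workaround is a watchdog that bounces the interface when a known neighbor stops answering; a minimal sh sketch, where the interface name and gateway address are placeholders:

    #!/bin/sh
    # bounce $IF whenever $GW stops answering pings
    IF=igb0
    GW=192.168.1.1
    while true; do
        if ! ping -c 1 -t 2 "$GW" > /dev/null 2>&1; then
            ifconfig "$IF" down
            sleep 1
            ifconfig "$IF" up
        fi
        sleep 10
    done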

I210 Network Teaming SFT Failover 12 Second Delay


Teaming failover takes ~12 seconds using I210 adapters on Windows 10. Teaming failover with previous system configurations occurred in under 1 second. The attached Wireshark capture shows the data transfer from PC2 (192.168.28.72) to PC1 (192.168.28.71). The failover was induced after packet 368. Transfer resumes at packet 392, after ~12.6 seconds. Is it possible to reduce the failover time?

 

System specs: ANS 22.10 driver, SFT-teamed I210 adapters with default power management and default advanced settings, Windows 10 Enterprise Build 1607

vxlan: non-ECT with TOS=0x2 log messages were generated


Hi

 

I use Intel 1G and 10G Ethernet cards with Open vSwitch in a VXLAN overlay network.

Some VMs started to use ECN, and I found that weird log messages about non-ECT were being generated.

 

I checked the VXLAN packets, but all packets had no errors. Also, the VXLAN outer header's ECN was normal according to RFC 6040.

 

I changed the Ethernet card to a Broadcom one, and found there was no non-ECT log.

I need to stop these wrong non-ECT logs. What should I do?

 

Please see the information below:

 

[root@compute01 ~]# grep vxlan /var/log/messages | tail

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:32:57 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:33:32 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 13 10:33:34 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

 

[root@compute01 ~]# uname -a

Linux compute01 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@compute01 ~]# ethtool -i eth4

driver: ixgbe

version: 4.4.0-k-rh7.4

firmware-version: 0x80000609

expansion-rom-version:

bus-info: 0000:04:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

[root@compute01 ~]# ethtool -k eth4

Features for eth4:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: off [fixed]

        tx-checksum-ip-generic: on

        tx-checksum-ipv6: off [fixed]

        tx-checksum-fcoe-crc: on [fixed]

        tx-checksum-sctp: on

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

        tx-tcp-mangleid-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: on [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: on

tx-sit-segmentation: on

tx-udp_tnl-segmentation: on

tx-mpls-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

busy-poll: on [fixed]

tx-gre-csum-segmentation: on

tx-udp_tnl-csum-segmentation: on

tx-gso-partial: on

tx-sctp-segmentation: off [fixed]

l2-fwd-offload: off

hw-tc-offload: off [fixed]

 

 

 

///////////////////////////////////////////////////////////////////

root@oscompute01:~# grep vxlan /var/log/syslog | tail 

Dec 13 18:53:50 oscompute01 kernel: [188924.432135] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.431957] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.432004] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:51 oscompute01 kernel: [188925.447808] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:52 oscompute01 kernel: [188926.447743] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:52 oscompute01 kernel: [188926.463339] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:54 oscompute01 kernel: [188928.463280] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:54 oscompute01 kernel: [188928.478883] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:55 oscompute01 kernel: [188929.463051] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

Dec 13 18:53:55 oscompute01 kernel: [188929.478835] vxlan: non-ECT from 192.168.2.32 with TOS=0x2

root@oscompute01:~# uname -a

Linux oscompute01 4.4.0-103-generic #126-Ubuntu SMP Mon Dec 4 16:23:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@oscompute01:~# ethtool -i eth1

driver: igb

version: 5.3.5.12

firmware-version: 1.70, 0x80000f44, 1.1752.0

expansion-rom-version:

bus-info: 0000:04:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

root@oscompute01:~# ethtool -k eth1

Features for eth1:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: on
        tx-checksum-ip-generic: off [fixed]
        tx-checksum-ipv6: on
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]

scatter-gather: on

        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off [requested on]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off [fixed]

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: off [fixed]

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

 

 

///////////////////////////////////////////////////////////////////

[root@compute01 ~]# tail -f /var/log/messages | grep -i vxlan

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:28 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:30 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

Dec 14 11:24:30 compute01 kernel: vxlan: non-ECT from 192.168.40.55 with TOS=0x2

^C

[root@compute01 ~]# uname -a

Linux compute01 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@compute01 ~]# ethtool -i eth4

driver: ixgbe

version: 5.3.3

firmware-version: 0x800005b6, 1.1752.0

expansion-rom-version:

bus-info: 0000:04:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

[root@compute01 ~]# ethtool -k eth4

Features for eth4:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: off [fixed]

        tx-checksum-ip-generic: on

        tx-checksum-ipv6: off [fixed]

        tx-checksum-fcoe-crc: on [fixed]

        tx-checksum-sctp: on

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

        tx-tcp-mangleid-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: on [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-mpls-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

busy-poll: on [fixed]

tx-gre-csum-segmentation: on

tx-udp_tnl-csum-segmentation: on

tx-gso-partial: on

tx-sctp-segmentation: off [fixed]

l2-fwd-offload: off [fixed]

hw-tc-offload: off
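
The message comes from the kernel's RFC 6040 ECN decapsulation check on the outer header. To confirm on the wire what ToS value the outer VXLAN headers actually carry, a capture with the IP header details printed may help; this assumes the standard VXLAN UDP port 4789:

    # -v prints the outer IP header's tos field for each encapsulated packet
    tcpdump -ni eth4 -v udp port 4789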

Intel I210-T1 and disabling MCTP


So I noticed the I210-T1 supports MCTP, and I'm not seeing where I can disable this. I checked the firmware boot config guide and it was conspicuously blank: Upgrade, Enable, or Disable Flash with the Intel® Ethernet Flash...

 

Also, does anybody know whether the I210-T1 hooks into IME if the onboard NIC (which IME normally uses) is disabled? I.e., I get the feeling Intel is sneaky like that.

 

