I would like to know whether this model, the Intel Ethernet Quad Port Server Adapter I340-T4, and the model below, the Intel Corporation 82580 Gigabit Network Connection, are the same model.
I currently have a high-speed trading network set up, and the machines that have the Intel® 82579LM integrated are experiencing severe packet loss. It started off at 2,000 packets lost per 10 minutes. I have tried all the solutions suggested in the previous threads about this issue. I have updated the BIOS. The machines we use are HP Z220(230)s. Any machines that use a different model of Intel NIC don't experience this problem. I have installed the latest drivers and tweaked the power options on the NICs down to nothing. I also increased the input and output buffer sizes to the maximum setting, which does give me a smaller packet loss rate (70 packets per 10 minutes), but it's still not good enough; as this is a trading network, we cannot afford packet loss. I have done extensive troubleshooting on our network and the Cisco switches we use, and they are all in top shape. The issue is within the Intel® 82579LM NIC itself and how it reacts to UDP multicast traffic.
Any suggestions I can implement to lessen the packet loss count on the network? With so many of these NICs on the floor, I'm dropping roughly 900,000 packets a day, which is way too much.
P.S. Just as an example, the Intel I217-LM has dropped 15 packets in the last 24 hours, compared to the Intel® 82579LM dropping roughly 9-10k packets per 24 hours.
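In case it helps anyone else hitting this: on Windows 8 / Server 2012 and later, the buffer change I described can be scripted with PowerShell. A minimal sketch, assuming the adapter is named "Ethernet" (check Get-NetAdapter for the real name):
# list the adapter's tunables first
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue
# push receive/transmit buffers to their maximum
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue "2048"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue "2048"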
<><> Apologies if this has been posted in the wrong place <><>
Ok I'll keep this brief.
Problem:- LAN regularly disconnects and reconnects 30 seconds later.
Symptoms:- Lose connection to LAN / Internet for around 30 seconds.
Background:- Had this problem with both P8P67 B2 and P8P67 Pro boards. I have several other computers connected to the switch (not a hub) and they are working fine. Have replaced the lead to no avail. Have even used the same lead in several other computers, where it works fine.
Config:-
Study
Netgear 8 port SWITCH 10/100/1000
Server
VOIP
PC
Switch connected to lounge, hard-wired via outdoor shielded Cat6 lead.
Lounge
Netgear 8 port SWITCH 10/100/1000
Router
XBOX 360
Wii
<><><>
Message in System Event Logs;-
<><>
Warning message - date time - source = e1cexpress
Event ID = 27
Intel 82579V Gigabit Network Connection
- Network link is disconnected
<><>
Then it states it's connected again.
Have also tried the following;-
Remove Kaspersky 2011
Ensure ALL power management even OS is disabled
Use IPV4 instead of IPv6 in prefix policies
Disable native IPv6
Disable tunnel IPv6
Disable IPv6
netsh interface tcp set global rss=disabled
netsh interface tcp set global autotuninglevel=disabled
netsh int ip set global taskoffload=disabled
Disabled SNP;-
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
EnableTCPChimney=dword:00000000
EnableTCPA=dword:00000000
EnableRSS=dword:00000000
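To double-check that those changes actually took effect, I also dumped the current state with:
netsh int tcp show global
netsh int ip show global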
Have tried drivers from the Asus MB CD, the Asus website, your website, and Windows Update, all to no avail.
Please help.
Hello!
We are looking at Intel network adapters for our project (high-speed packet processing using the DPDK framework).
Intel X710-DA2 NICs satisfy our requirements, but we have a question regarding their connection possibilities.
QUESTION:
We would like to connect the Intel X710-DA2 NIC (SFP+) to a switch with XFP 10G interfaces.
There are a number of XFP-to-SFP+ adapters (for example, the HP 10G X244 XFP to SFP+ 3m Direct Attach Copper Cable), but according to the "Intel Ethernet Adapters and Devices User Guide", only SFP+ modules made by Intel are supported. It looks like there is no XFP-to-SFP+ adapter made by Intel.
Is there any way to link the Intel X710-DA2 NIC to the XFP interface of the switch?
What can you recommend in this case?
--
Best regards,
Andrew
I am trying to bring up the BDX-DE platform with the internal 10GbE ports, using pktgen (DPDK) with the igb_uio driver on Red Hat Linux,
but I am not able to get pktgen (DPDK) to send or receive packets.
I am getting the following messages from the dmesg command at the Linux command prompt:
ixgbe 0000:03:00.0: eth2: Fake Tx hang detected with timeout of 20 seconds
ixgbe 0000:03:00.1: eth3: Fake Tx hang detected with timeout of 20 seconds
ixgbe 0000:03:00.0: eth2: Fake Tx hang detected with timeout of 20 seconds
ixgbe 0000:03:00.1: eth3: Fake Tx hang detected with timeout of 20 seconds
ixgbe 0000:03:00.0: eth2: Fake Tx hang detected with timeout of 20 seconds
ixgbe 0000:03:00.1: eth3: Fake Tx hang detected with timeout of 20 seconds
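For what it's worth, these messages come from the kernel ixgbe driver, which makes me think the ports may still be bound to ixgbe instead of igb_uio when pktgen starts. The binding steps I am using are roughly the following (the build path and script location vary by DPDK version):
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --status
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0 0000:03:00.1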
Please help me here to resolve this issue.
Hello,
I'm looking for someone who has experience with the X540-T2 and HP servers.
I would like to put an X540-T2 in two HP 320e Gen8 v2 servers.
Thanks in advance for your help
Oliver
Hello community,
I am using two Intel X520 CNAs to create a VN2VN FCoE connection.
On the target side I have created the FCoE interfaces on my CNA port and I have created LUNs. I am using Linux as the target, with kernel version 3.10.54; the Open-FCoE utils and LIO target modules are installed.
I am using VMware 5.1 as the initiator. I followed the instructions provided by Intel and VMware on how to configure FCoE on VMware, but I can't see the LUNs. The instructions are here: vSphere Documentation Center.
I am using a direct port-to-port connection (VN2VN) over a 10Gb cable between the two ports (one on the initiator side, one on the target side). Both CNAs are the same Intel X520-series.
So:
1. I have installed the latest driver for this specific CNA
2. I created a vSphere standard switch
3. Added the network interface that supports FCoE to that switch
4. Entered the VLAN ID
5. Added the software FCoE adapters
But when I rescan the device I don't see any LUNs.
I have also tried Windows Server and Linux CentOS as initiators and didn't have any problems; using Windows and CentOS as FCoE initiators I was able to see the LUNs on the target.
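For reference, on the Linux target the CNA port is enabled for VN2VN via the Open-FCoE config file, roughly like this (eth2 is a placeholder for the CNA port name):
# /etc/fcoe/cfg-eth2
FCOE_ENABLE="yes"
DCB_REQUIRED="no"
MODE="vn2vn"
# then restart the service: service fcoe restart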
Please help. Are there any special settings for VMware as an initiator, or any special settings for VN2VN connections? Basically, how can I see the LUNs on my VMware initiator?
Thank you
Hi
I have a question about the native NIC Teaming in Windows Server 2012 R2 with regard to the I350 quad port adapters.
In our Hyper-V implementation, we have 3 x quad port NICs per Hyper-V Cluster node.
With two of these adapters, we have balanced a Switch Independent / Dynamic (Sum of Queues mode) team across 4 of these ports for our VM switch. Here's the thing... two of these ports are on one physical adapter and the other two ports are on the second physical adapter. Each port has VMQ enabled with suitable processor core allocations for a Sum of Queues team. VMQ is required because the team is connected to a VM switch which is supporting a reasonable number of virtual machines.
The remaining 4 ports in these two adapters (2 in one card and 2 in the other) are allocated to iSCSI MPIO use for access to our SAN. These remaining ports have RSS enabled.
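For reference, the per-port assignments are done with PowerShell roughly like this (port names and processor numbers are illustrative, not our production values):
Set-NetAdapterVmq -Name "TeamPort1" -BaseProcessorNumber 2 -MaxProcessors 4
Set-NetAdapterVmq -Name "TeamPort2" -BaseProcessorNumber 10 -MaxProcessors 4
Set-NetAdapterRss -Name "iSCSI1" -BaseProcessorNumber 18 -MaxProcessors 2
Get-NetAdapterVmq   # verify the resulting queue/processor layout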
The questions I have are as follows:
A) What is Intel's stance on mixing RSS and VMQ modes on different ports of the same physical adapter?
B) Is splitting a VMQ-enabled NIC team across ports from different physical NICs supported by Intel when using the native Windows NIC Teaming in Server 2012 R2?
We are seeing intermittent latency issues and packet loss from within virtual machines with VMQ enabled. Disabling VMQ does work around the issue; however, this isn't really a solution.
Are there any known issues with these adapters when using Virtual Machine Queues?
Kind Regards
Matt
I had to reinstall Windows XP Home Edition on my desktop PC. It is a Mesh computer with an Asus P4R800-VM motherboard (rev 1.03), an Intel Pentium CPU, the IXP200 chipset, and a Radeon graphics card.
I did not get a separate driver CD from Mesh, as the system came pre-installed on the PC.
Now I am seeing the missing drivers, as there are big yellow question marks on the following devices in Device Manager:
Ethernet Controller
Multimedia Audio Controller
SM Bus Controller
Video Controller
For this reason I have no internet connection to update or register with Microsoft. Could you please point me in the right direction to get these missing drivers?
I am a complete novice and would really appreciate guidance on where to get these drivers and how to recognise which ones are the correct ones for my system to download.
I cannot contact Mesh Computers here in the UK for drivers, as they went into liquidation a long time ago.
Please help.
Hello,
We have just purchased 8 Intel® Ethernet Converged Network Adapter X710-DA4 cards and installed them in our Dell R610 VMware 5.5 hosts. At first the cards seem to work and perform well; however, we now have an issue where the links on all of our hosts flap. The NIC resets one port at random time intervals, and the port then comes right back into service. We see the same behavior on all of our ESXi hosts. The cards are connected to a Cisco Nexus 5700 switch using Twinax copper cables. We talked to VMware and they seem to think it's a firmware issue or something on the card. Both Dell and Cisco have looked at the server and network hardware and do not see any issues. Does anyone have any suggestions?
Thanks
Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, has huge 6,000+ DPC latency spikes once every few seconds.
My specs:
Intel(R) 82579LM Gigabit Network Connection
Windows 7 SP1 32-bit + latest Windows updates
I downgraded to the previous driver version, 19.1, and the problem is just gone.
I've seen a couple of postings in this forum (each going back a year or more) with responses from Intel suggesting that the two drivers are tightly coupled due to the messaging/mailbox interface. Is there a general table somewhere that shows the correlation between an ixgbe version and the corresponding ixgbevf version? Can this be inferred from the release dates of the stable versions on SourceForge (e.g. ixgbe 3.19.1 and ixgbevf 2.12.1 were both released 12-20-2013)?
Right now I'm running my Linux distro's ixgbevf 2.7.12 in VMs, against 3.19.1 on the server. Performance is excellent, but I've seen some anomalies regarding VF link status if one of the two physical ports isn't cabled to a switch (ethtool & ixgbe report the correct link status, but ethtool & ixgbevf get it wrong for some VF instances in the VMs). To me this seems like a possible driver compatibility issue.
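For the record, this is how I'm comparing versions on each side (eth0 is a placeholder for the actual interface):
# on the hypervisor
modinfo ixgbe | grep '^version'
# inside a VM
modinfo ixgbevf | grep '^version'
ethtool -i eth0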
Thanks & best regards,
Chris
I have an EVGA socket 1156 micro-ATX motherboard. It has Intel's P55 V chipset with their 82578DC onboard NIC. When cold booting into Windows 7 x64, the device encounters error code 10 (device will not start). To get it to work, I have to manually go into Windows Device Manager, disable the NIC, and then enable it. As you can imagine, this is not an enduring solution. With a Google search, I've discovered this error/problem is quite prevalent with Intel onboard NICs. There are no viable solutions (at least none from Intel). Most are community-developed workarounds (e.g. scripts to automatically disable and enable the NIC).
Other notes:
What I've found:
I believe this to be a BIOS-to-Windows hand-off issue (i.e. an Intel driver issue). During system POST, the BIOS assigns the NIC the typical Ethernet IRQ 11. In Windows, with the code 10 error, no other device I have is assigned IRQ 11, so there shouldn't be a conflict preventing the device from starting or using that IRQ. After disabling/enabling the NIC, however, it is assigned IRQ 4294967294 (displayed as "-2" in the Device Manager resources), which seems quite odd. So I think IRQ assignment is where the problem lies. This is where I am hung up. Users used to have a way to manually assign IRQs, but IRQ issues have (supposedly) become extinct, so I can't find any options for trying to force an IRQ assignment.
Help:
Has this issue been resolved yet? All the forum threads I've seen on this issue have been dead ends/abandoned. Does this IRQ assignment oddity spark any ideas? Has anyone tried forcing IRQ assignments for the NIC in the registry?
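In case it helps, the disable/enable workaround can be scripted with Microsoft's devcon utility; a sketch (the DEV_XXXX hardware ID is a placeholder — take the real one from the NIC's Details tab in Device Manager):
devcon find "PCI\VEN_8086*"
devcon disable "PCI\VEN_8086&DEV_XXXX"
devcon enable "PCI\VEN_8086&DEV_XXXX"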
I just built this computer yesterday and have updated the BIOS and all motherboard drivers. I am connecting to a Linksys WRT1900AC with a Cat6 cable that was used on my older computer (which connected at 1 Gbps). I cannot think of anything else to try, and I do not have any other NIC cards to install in a PCI slot to find out whether it's just the onboard NIC. I have set the card to auto-negotiation, and 100 Mbps full duplex is all I get; when I try to force a 1 Gbps connection, Windows tells me that I have a cable unplugged until I change it back. Jumbo packets are disabled. I have been trying to find anyone with the same problem online, and I have only seen a couple of posts, none related to this specific card.
Thanks in advance for any help that you provide.
Hello
I just purchased a couple of Intel X540-T2 10GbE cards. Unfortunately my Asus X87 Pro motherboard will not boot with either of these NICs. I updated the BIOS of the motherboard to the latest version. I tried different PCIe slots (but the two main slots are PCIe 3.0 x16, so they should support this card). No other PCIe card is connected to the motherboard.
The motherboard doesn't really show any error message (the number on the LED is either "40" or "55"), and no BIOS screen appears. The motherboard is powered by a powerful PSU and only has a few SSDs connected, so it doesn't look like a power issue. The CPU is an i7-4771, no overclocking or anything.
Has anyone experienced similar issues with an Asus motherboard? I can see that some people are experiencing a similar problem here:
http://forums.smallnetbuilder.com/showthread.php?t=18406
Any idea whether all Intel RJ45 10GbE cards are affected, or whether some models are compatible with Asus boards?
thanks
Charles
Hi,
Some time ago I tried using an Intel I350 with Debian Wheezy:
yyyy@xxxx:~/igb-5.2.9.4/src$ uname -a
Linux xxxx 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
yyyy@xxxx:/home/yyyy/igb-5.2.9.4/src# modinfo igb
filename: /lib/modules/3.2.0-4-amd64/kernel/drivers/net/igb/igb.ko
version: 5.2.9.4
license: GPL
description: Intel(R) Gigabit Ethernet Network Driver
author: Intel Corporation, <e1000-devel@lists.sourceforge.net>
srcversion: E377200391EBF74638FEDA2
yyyy@xxxx:/home/yyyy/igb-5.2.9.4/src# ip -V
ip utility, iproute2-ss140804
I was using bonding in 802.3ad mode with an Extreme Networks switch. I had 2 similar servers with identical configurations. The old one was using old e1000 interfaces with 802.3ad bonding and an l3+l4 hash. The new one was using an I350-T2 dual port adapter with the same configuration. No virtual machines. The issue was that on the I350, around 20% of packets to/from some random source/destination IPs were dropped. The servers were actually cloned, so I'm certain there was no misconfiguration. I've tried disabling the anti-spoof check (https://communities.intel.com/message/192668#192668) but no luck; it doesn't work on my iproute2 and kernel (has this been fixed yet?). Any hints on how to compile the driver without anti-spoof? Has anyone experienced similar symptoms?
In the end the fix was not to use bonding at all. When using 2 separate ports on this I350-T2 adapter, with the same VLANs and the same IPs, everything works flawlessly. This is why I assume there is a problem with the igb driver on the I350-T2 when using bonding (with VLANs).
UPDATE: I've actually found the post "Latest Flexible Port Partitioning Paper is now available. Learn about QoS and SR-IOV!", which confirms there's an issue with 802.3ad. Can anyone give me a hint on how to compile the igb driver without the anti-spoofing feature?
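(For reference, the per-VF command I was trying, which needs a new enough kernel and iproute2, is:
ip link set eth0 vf 0 spoofchk off
with the interface and VF number as placeholders. On Wheezy's iproute2-ss140804 it was rejected for me.)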
Regards,
Message was edited by: Bartek Krawczyk
Hello.
We used an Intel LAN card (Intel® PRO/1000 PT Dual Port Server Adapter) for a packet mirroring system.
When promiscuous mode was set and no IP address was set on the card, the Intel adapter driver appeared to write a log entry every 5 or 6 minutes (ETW logging).
This logging used a tremendous amount of CPU, so the whole system was affected by it.
(We generally received about 500 MB~1 GB of data per second.)
We tried everything for this problem, and finally we found that when the adapter has an IP address set, the driver does not write any such log and system performance is great.
At this time, we would like to know if there is a KB article or any other workaround for this, because we must advise our customers to set an IP address on this LAN card for system performance.
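For reference, the sort of command we give them is simply (adapter name and addresses are placeholders):
netsh interface ip set address "Local Area Connection" static 192.168.100.10 255.255.255.0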
Any advice is welcome.
Thank you for reading this.
B/R hopi
k4solution inc.,
www.k4solution.com
Hello
I have a
Windows 7 64-bit Lenovo ThinkCentre M73 and an Intel X520-DA2 card. The OS sees the card but says "Cannot start". The card is currently in a PCI Express x16 slot, not a PCI Express x8 slot.
Is there any configuration I need to do to get this working?
In our setup we have a physical server hosting several virtual machine guests.
All guests are assigned two interfaces which are bonded together for redundancy support.
GUESTS communicate over the native VLAN (bond0) and one or more extra VLAN interfaces (bond0.x).
The setup is built with:
It took some effort to get this setup up and running. Especially the bonding part. I would like to share our experience for the benefit of the community. I'm not a specialist in this domain and I have no experience with other setups (e.g. other hardware or OS).
I will skip the process of enabling SR-IOV in the BIOS. This will most likely be different on your platform anyhow. See the HP document referenced below for details on HP ProLiant servers.
We use kernel 2.6.32-431.20.3.el6.x86_64.
The kernel parameters intel_iommu=on pci=realloc intremap=no_x2apic_optout are required to enable SR-IOV support.
We upgraded to the latest driver officially supported by HP. Without this driver update communication between the VM and the HOST was only possible on the native network and not on a VLAN tagged network.
kmod-hp-ixgbe-3.19.0.46-4.rhel6u5.x86_64
kmod-hp-ixgbevf-2.12.0.38-4.rhel6u5.x86_64
The kernel module parameters used are:
options ixgbe max_vfs=63,63
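After reloading the ixgbe module, you can verify that the VFs were actually created, for example:
lspci | grep -ci "virtual function"
ip link show eth0
The lspci count should match the total number of VFs requested via max_vfs, and ip link lists each VF with its MAC.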
Virsh has support for SR-IOV and can assign a virtual function to the VM. On the HOST two networks are defined, one for eth0 and one for eth1. They use mode=hostdev, which is for SR-IOV support. Below is the definition for eth0; the one for eth1 is similar.
cat /etc/libvirt/qemu/networks/autostart/passthrough_eth0.xml
<network>
<name>passthrough_eth0</name>
<uuid>4bbbf5e2-7b80-7cf9-c667-50bb711f2e4c</uuid>
<forward mode='hostdev' managed='yes'>
<pf dev='eth0'/>
</forward></network>
Assign two network interfaces to the GUEST, one from passthrough_eth0 and one from passthrough_eth1.
<domain type="kvm">
...
<devices>
...
<interface type="network">
<mac address="52:54:00:bb:f7:8f"/>
<source network="passthrough_eth0"/>
</interface>
<interface type="network">
<mac address="52:54:00:47:ce:4f"/>
<source network="passthrough_eth1"/>
</interface>
</devices>
</domain>
This was the trickiest part to get right.
Starting with the HOST: we rely on the link state of the interface to trigger the failover. eth0 is the preferred interface; this makes sure that eth0 is active whenever possible. The updelay of 30 seconds is the time needed by the switch port to come into the spanning-tree forwarding state, so we wait 30 seconds before using eth0 as the active interface when it becomes available (again). The resulting bonding configuration stored in /etc/modprobe.d/bonding.conf is:
alias bond0 bonding
options bonding mode=1 miimon=100 primary=eth0 updelay=30000
Take into account:
In the end we use the same bonding config on the GUEST:
alias bond0 bonding
options bonding mode=1 miimon=100 primary=eth0 updelay=30000
After virsh has started the GUEST, we have a script on the HOST that updates the MAC address of eth1 of the GUEST.
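A minimal sketch of what that script does, reusing the eth1 MAC from the domain XML above (the VF number 3 is a placeholder; ours derives it from virsh output):
#!/bin/bash
# re-pin the guest's eth1 MAC on the corresponding VF of the host's eth1 (VF number is an example)
ip link set eth1 vf 3 mac 52:54:00:47:ce:4f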
After all this configuration, what is now the end result?
Communication between two VMs works over both the native and the tagged VLAN interfaces. Unplugging a network cable will cause a bonding failover and all communication resumes. Restoring the network cable will force all bonds back to eth0 after 30 seconds.
VM to HOST communication also works as expected (native, tagged VLAN, bonding failover).
But keep in mind that all machines (HOST and all GUESTS) must use the same active interface.
What still doesn't work is communication between a GUEST using SR-IOV and a GUEST connected to the bridge on the HOST. Each such GUEST can communicate with the HOST or the outside world (native and tagged VLAN), but they can't communicate with each other. Ethernet broadcast packets (e.g. ARP requests) arrive in the bridged GUEST, but unicast Ethernet frames don't. Packets from the bridged GUEST to the SR-IOV GUEST appear to work as expected.
Maybe someone has a solution to this problem...
Hi.
Is there any way to change the MAC address on a VF inside a VM without touching the host?
Is it possible to add additional MAC addresses which the NIC's driver passes through to the VF?
When the MAC inside the VM is changed via
$ sudo ip link set eth0 address fa:16:55:55:55:55
the following log message appears in the host system:
Dec 24 15:08:29 localhost kernel: [685436.750320] ixgbe 0000:01:00.0 p5p1: VF 3 attempted to override administratively set MAC address
Dec 24 15:08:29 localhost kernel: [685436.750320] Reload the VF driver to resume operations
and traffic stops flowing through eth0. Then I have to change the MAC on the VF on the host to fix this:
$ sudo ip link set p5p1 vf 3 mac fa:16:55:55:55:55
The following log message appears in the host system:
Dec 25 15:12:14 localhost kernel: [685661.582284] ixgbe 0000:01:00.0: setting MAC fa:16:55:55:55:55 on VF 3
Dec 25 15:12:14 localhost kernel: [685661.582291] ixgbe 0000:01:00.0: Reload the VF driver to make this change effective.
and traffic starts flowing again.
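For now I keep the two steps in sync with a tiny host-side helper (just a sketch that automates the command above):
#!/bin/bash
# usage: ./vf-mac.sh <pf> <vf-num> <mac> -- run after changing the MAC inside the VM
ip link set "$1" vf "$2" mac "$3"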
I have the same bonding use case and issues as described in "GUEST with bonding and VLAN support on CentOS 6.5 with Intel SR-IOV".
We use Intel 10G NICs
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Host OS is Ubuntu 14.04
$ uname -rv
3.13.0-40-generic #69-Ubuntu SMP Thu Nov 13 17:53:56 UTC 2014
Guests are Fedora 20
$ ip link sh dev p5p1
3: p5p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 0c:c4:7a:1e:a9:5c brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto
    vf 2 MAC fe:c4:ea:e8:d0:73, spoof checking on, link-state auto
    vf 3 MAC fa:16:11:11:22:22, vlan 1338, spoof checking off, link-state auto
    vf 4 MAC fa:16:3e:e9:9e:fe, vlan 1338, spoof checking on, link-state auto
$ ethtool -i p5p1
driver: ixgbe
version: 3.15.1-k
firmware-version: 0x80000208
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no