Hi,
Has anyone successfully used the Intel X520-DA2 10GbE adapter connected to a Cisco Nexus switch using Cisco's 7 m active SFP+ cables? The Cisco part number of the cable is SFP-H10GB-ACU7M.
We have around 80 of these NICs, two per server, in HP DL380 G7 servers running VMware ESXi 4.1 (Releasebuild-582267). The link comes up OK, but on approximately 6 of the servers ESX logs "vmnicx: NIC Link is Down" immediately followed by "vmnicx: NIC Link is Up 10Gbps". In the worst case we're seeing these messages every 60 seconds or so on the server, yet we don't see any corresponding link down/up events on the Cisco switch.
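As a quick sanity check, one way to see which ports are flapping and how often is to count the "Link is Down" events per vmnic. This is only a sketch against an embedded sample log; the message format follows the lines quoted above, and on a real ESXi 4.1 host you would point it at the vmkernel log instead (the sample timestamps and interface names below are illustrative, not from our servers):

```shell
#!/bin/sh
# Sketch: count link-flap events per vmnic from ESXi vmkernel-style log lines.
# The sample log below is illustrative; on a real host you would read the
# vmkernel log file instead of this here-document.
LOG=$(cat <<'EOF'
Jan 10 10:00:01 vmkernel: vmnic2: NIC Link is Down
Jan 10 10:00:01 vmkernel: vmnic2: NIC Link is Up 10Gbps
Jan 10 10:01:02 vmkernel: vmnic2: NIC Link is Down
Jan 10 10:01:02 vmkernel: vmnic2: NIC Link is Up 10Gbps
Jan 10 10:02:03 vmkernel: vmnic3: NIC Link is Down
Jan 10 10:02:03 vmkernel: vmnic3: NIC Link is Up 10Gbps
EOF
)

# Extract the interface name from each "Link is Down" line and tally per vmnic.
echo "$LOG" | grep 'NIC Link is Down' \
  | sed 's/.*\(vmnic[0-9][0-9]*\).*/\1/' \
  | sort | uniq -c
```

Comparing these counts across servers (and against the switch-side logs) makes it easy to confirm that only a subset of hosts see the flaps.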
I've been looking at the quoted specifications for both the Intel NIC and the Cisco switch and have found the following.
In the Intel note "Which SFP+ modules, SFP modules, and cables can I use with the X520 Series?" there's a question "What are the SFP+ direct attach copper cable requirements for the Intel Ethernet Server Adapter X520 series?" with the answer stating "Any SFP+ passive or active limiting direct attach copper cable that complies with the SFF-8431 v4.1 and SFF-8472 v10.4 specifications".
When I look at the Cisco web site for the specifications their cables are built to, the "Cisco 10GBASE SFP+ Modules Data Sheet" states the supported standards are SFP+ MSA SFF-8431 (optical modules, active optical cables, and passive Twinax cables) and SFP+ MSA SFF-8461 (active Twinax cables). Additionally, their "Twinax Cables Certification Matrix for Cisco Nexus 2000 and Nexus 5000 Series Switches" shows that the only supported Twinax cables over 5 m are Cisco's own cables.
Does this mean that the Intel NIC and the Cisco switches do not support a common standard for active SFP+ cables, or have I misunderstood the documentation?
Thanks in advance.