Channel: Intel Communities : Discussion List - Wired Ethernet

X520 10G - UDP packet drops


Hello,

 

I have a strange problem occurring with my new server, which has Intel X520 network cards. This server runs Ubuntu KVM with virtual machines that collect NetFlow data; these devices are virtual appliances. I noticed within the virtual machine that NetFlow datagrams were being dropped. I first suspected resource constraints within KVM, however after disabling all of the virtual appliances (except one) the packet drops were still occurring. I am able to determine the packet drops by simply taking a pcap and running it through a script with tshark: each UDP datagram that contains the NetFlow data has a sequence number in the application-layer header. I then ran a pcap on the host bridge interface and found that the issue is occurring outside the virtual machine. I then did a packet capture on the physical interface to rule out the bridge, and the issue is occurring there too. Finally, I mirrored the interface to a separate server that has a similar 10 Gig network card, and that device is able to capture every packet.
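For reference, a minimal sketch of this kind of sequence check (assuming tshark's NetFlow dissector, which exposes the export sequence number as the cflow.sequence field, a NetFlow v9-style sequence that increments by one per export packet, and a single exporter in the capture; the file name is illustrative):

# List gaps in the NetFlow export sequence numbers seen in a capture
tshark -r flows.pcap -Y cflow -T fields -e cflow.sequence | \
awk 'NR > 1 && $1 != prev + 1 { printf "gap: %d -> %d\n", prev, $1 } { prev = $1 }'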

 

I suspect this is a network card offload/driver issue, however I have not been successful in finding a solution. I tried upgrading the driver from 4.0.1 to 4.3.13 with no difference.

 

I am running out of options.

 

I tried disabling offloads:

for i in rx tx sg tso ufo gso gro lro; do sudo ethtool -K enp33s0f1 $i off; done
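After the loop, a hedged way to confirm what actually changed is to dump the feature list again (some features may still report as fixed):

ethtool -k enp33s0f1 | grep -E 'offload|scatter-gather|checksum'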

 

 

 

I tried increasing the receive memory buffer allocation. The defaults were:

 

cat /proc/sys/net/core/rmem_default

212992

cat /proc/sys/net/core/rmem_max

212992

 

and set the values to:

 

net.core.rmem_default = 16777216

net.core.rmem_max = 16777216
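For completeness, a sketch of applying these values at runtime and checking for socket-level UDP drops (the sysctl keys are the ones above; the netstat error counters are a standard place to see receive-buffer overruns):

sysctl -w net.core.rmem_default=16777216
sysctl -w net.core.rmem_max=16777216
# persist by adding the same key = value lines to /etc/sysctl.conf
netstat -su | grep -i error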

 

 

I noticed packets were incrementing the counters below in ethtool -S, so I tried the modprobe command below, but it did not help.

 

hw_rsc_aggregated: 4074

hw_rsc_flushed: 1837

 

modprobe ixgbe InterruptThrottleRate=off
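Note that module parameters only take effect when the module is (re)loaded; a hedged sketch of the persistent form (the file path is the conventional one, and the 0-per-port value for disabling throttling is an assumption):

# /etc/modprobe.d/ixgbe.conf - applied the next time ixgbe loads
options ixgbe InterruptThrottleRate=0,0
# reload the driver for the change to take effect (this drops connectivity on the ixgbe ports)
rmmod ixgbe && modprobe ixgbe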

 

 

 

 

 


Server with the issue:

lspci |grep Ethernet

0c:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

0c:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

21:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

21:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

62:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

62:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

63:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

63:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)

 

I first tried with the driver version below.

version: 4.0.1-k

 

Then I upgraded and tried the driver version below.

version: 4.3.13

 

 

 

Server without the issue:

[em621d@shdw01vmh ~]$ lspci |grep Ethernet

0e:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

0e:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

version: 3.6.7-k

 

 

 

Traffic characteristics:

 

Under 150Mbps

Rate 10k pps

Packet size 1340 - 1372 bytes

 

 

 

ethtool -S enp33s0f1

NIC statistics:

     rx_packets: 728170

     tx_packets: 8

     rx_bytes: 1007244560

     tx_bytes: 648

     rx_errors: 0

     tx_errors: 0

     rx_dropped: 0

     tx_dropped: 0

     multicast: 254

     collisions: 0

     rx_over_errors: 0

     rx_crc_errors: 0

     rx_frame_errors: 0

     rx_fifo_errors: 0

     rx_missed_errors: 0

     tx_aborted_errors: 0

     tx_carrier_errors: 0

     tx_fifo_errors: 0

     tx_heartbeat_errors: 0

     rx_pkts_nic: 728170

     tx_pkts_nic: 8

     rx_bytes_nic: 1010157240

     tx_bytes_nic: 680

     lsc_int: 1

     tx_busy: 0

     non_eop_descs: 0

     broadcast: 63

     rx_no_buffer_count: 0

     tx_timeout_count: 0

     tx_restart_queue: 0

     rx_long_length_errors: 0

     rx_short_length_errors: 0

     tx_flow_control_xon: 0

     rx_flow_control_xon: 0

     tx_flow_control_xoff: 0

     rx_flow_control_xoff: 0

     rx_csum_offload_errors: 0

     alloc_rx_page_failed: 0

     alloc_rx_buff_failed: 0

     lro_aggregated: 0

     lro_flushed: 0

     rx_no_dma_resources: 0

     hw_rsc_aggregated: 0

     hw_rsc_flushed: 0

     fdir_match: 0

     fdir_miss: 727925

     fdir_overflow: 0

     fcoe_bad_fccrc: 0

     fcoe_last_errors: 0

     rx_fcoe_dropped: 0

     rx_fcoe_packets: 0

     rx_fcoe_dwords: 0

     fcoe_noddp: 0

     fcoe_noddp_ext_buff: 0

     tx_fcoe_packets: 0

     tx_fcoe_dwords: 0

     os2bmc_rx_by_bmc: 0

     os2bmc_tx_by_bmc: 0

     os2bmc_tx_by_host: 0

     os2bmc_rx_by_host: 0

     tx_hwtstamp_timeouts: 0

     rx_hwtstamp_cleared: 0

     tx_queue_27_packets: 8

     tx_queue_27_bytes: 648

     (all other tx_queue_0 through tx_queue_70 counters: 0)

     rx_queue_0_packets: 9482

     rx_queue_0_bytes: 11643129

     rx_queue_0_bp_poll_yield: 0

     rx_queue_0_bp_misses: 0

     rx_queue_0_bp_cleaned: 0

     rx_queue_1_packets: 165724

     rx_queue_1_bytes: 232486518

     rx_queue_1_bp_poll_yield: 0

     rx_queue_1_bp_misses: 0

     rx_queue_1_bp_cleaned: 0

     rx_queue_2_packets: 80657

     rx_queue_2_bytes: 112241119

     rx_queue_2_bp_poll_yield: 0

     rx_queue_2_bp_misses: 0

     rx_queue_2_bp_cleaned: 0

     rx_queue_3_packets: 46645

     rx_queue_3_bytes: 64303361

     rx_queue_3_bp_poll_yield: 0

     rx_queue_3_bp_misses: 0

     rx_queue_3_bp_cleaned: 0

     rx_queue_4_packets: 39907

     rx_queue_4_bytes: 55708359

     rx_queue_4_bp_poll_yield: 0

     rx_queue_4_bp_misses: 0

     rx_queue_4_bp_cleaned: 0

     rx_queue_5_packets: 57077

     rx_queue_5_bytes: 77940273

     rx_queue_5_bp_poll_yield: 0

     rx_queue_5_bp_misses: 0

     rx_queue_5_bp_cleaned: 0

     rx_queue_6_packets: 12543

     rx_queue_6_bytes: 17162117

     rx_queue_6_bp_poll_yield: 0

     rx_queue_6_bp_misses: 0

     rx_queue_6_bp_cleaned: 0

     rx_queue_7_packets: 45791

     rx_queue_7_bytes: 63757988

     rx_queue_7_bp_poll_yield: 0

     rx_queue_7_bp_misses: 0

     rx_queue_7_bp_cleaned: 0

     rx_queue_8_packets: 10118

     rx_queue_8_bytes: 13687646

     rx_queue_8_bp_poll_yield: 0

     rx_queue_8_bp_misses: 0

     rx_queue_8_bp_cleaned: 0

     rx_queue_9_packets: 69990

     rx_queue_9_bytes: 97934717

     rx_queue_9_bp_poll_yield: 0

     rx_queue_9_bp_misses: 0

     rx_queue_9_bp_cleaned: 0

     rx_queue_10_packets: 5588

     rx_queue_10_bytes: 7125456

     rx_queue_10_bp_poll_yield: 0

     rx_queue_10_bp_misses: 0

     rx_queue_10_bp_cleaned: 0

     rx_queue_11_packets: 73009

     rx_queue_11_bytes: 101591101

     rx_queue_11_bp_poll_yield: 0

     rx_queue_11_bp_misses: 0

     rx_queue_11_bp_cleaned: 0

     rx_queue_12_packets: 6666

     rx_queue_12_bytes: 8709930

     rx_queue_12_bp_poll_yield: 0

     rx_queue_12_bp_misses: 0

     rx_queue_12_bp_cleaned: 0

     rx_queue_13_packets: 86238

     rx_queue_13_bytes: 119855689

     rx_queue_13_bp_poll_yield: 0

     rx_queue_13_bp_misses: 0

     rx_queue_13_bp_cleaned: 0

     rx_queue_14_packets: 10094

     rx_queue_14_bytes: 12339328

     rx_queue_14_bp_poll_yield: 0

     rx_queue_14_bp_misses: 0

     rx_queue_14_bp_cleaned: 0

     rx_queue_15_packets: 8641

     rx_queue_15_bytes: 10757829

     rx_queue_15_bp_poll_yield: 0

     rx_queue_15_bp_misses: 0

     rx_queue_15_bp_cleaned: 0

     (rx_queue_16 through rx_queue_70 counters: all 0)

     (tx_pb_0 through tx_pb_7 and rx_pb_0 through rx_pb_7 pxon/pxoff counters: all 0)


Constant cpu usage (20%) by Intel PROSet Monitoring Service


Hello there,

 

I recently noticed that CPU is constantly being used by the Intel PROSet Monitoring Service, about 20%.

 

At first I noticed that svchost.exe was using 20%, but Process Explorer suggested that it was the Intel PROSet Monitoring Service driving svchost that high.

 

When I stop the Intel PROSet Monitoring Service, CPU usage goes back to 0%-1% when idle.

 

I see no problem at all after stopping the service.

 

Can you tell me the purpose of this service and why it would constantly use 20% of my CPU?

 

French speaker here, so go easy on the writing criticism!

 

Thanks.

SR-IOV with ixgbe - Disabling Rx VLAN filter


Hi All,

We are trying to disable the VLAN filter on an Intel 82599ES 10-Gigabit Ethernet controller using ethtool, without modifying the driver code.

We are able to disable it successfully with the following OS and ixgbe driver version.

Red Hat Enterprise Linux Server release 7.1 (Maipo)

Kernel : 3.10.0-229.20.1.el7.x86_64

Ixgbe driver version : 4.0.1-k-rh7.1


[root@compute05 src]#ethtool -k enp9s0f0

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: off

vlan-challenged: off [fixed]

 

[root@compute05 src]#ethtool -K enp9s0f0 rx-vlan-filter off

 

 

But we are not able to do the same with the newer ixgbe driver version; we observed rx-vlan-filter reported as fixed.

Red Hat Enterprise Linux Server release 7.1 (Maipo)

Kernel : 3.10.0-229.20.1.el7.x86_64

Ixgbe driver  version:        4.3.13

 

[root@compute05 src]#ethtool -k  enp9s0f0

Features for enp9s0f0:

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

 

[root@compute05 src]# ethtool -K  enp9s0f0 rx-vlan-filter off

Could not change any device features

 

Any idea how to overcome this issue?

 

If any more information is needed, please let me know.

 

Thanks in advance.

Intel 82579LM - Windows 8.1 VLAN Disabled


Hello Guys

 

I have a Dell Latitude E6430 laptop and downloaded PROSet x64 version 20.5. After installing, I added 2 VLANs (14 and 2) in the VLAN tab, and in my Network Connections both show Disabled.

 

I checked my Cisco switch and the port is in trunk mode. A host connected with Windows 2012 R2 does not show the error.

 

My problem is only with the laptop; I tested with another version, 18.5, and see the same problem. The NIC shows disabled.

 

Thanks for your help.


Sebastian

Intel X710-DA2 "device will not start" Server 2008 R2


Just got this card (replacing a 520 that was behaving badly) and put it in an older Dell 2970 server. The device is recognized, but the driver will not start. I have tried multiple versions of the driver (20.7/20.4/19.0), all to no avail. The device shows up, but I get the dreaded yellow exclamation mark. This occurs with or without SFP+ modules inserted. In the event log it just says that the device won't start and to download the latest driver. If I use an old driver I also get "the NVM is newer than expected", and if I use the latest I get "the NVM is older than expected". I tried to update the NVM using the tool, but the tool reports "access error" and "device not found". The one oddity about this server is that it is an AMD Opteron-based server. I have seen compatibility issues in the past with other things... perhaps this would also fall into that category?

 

Has anyone seen this issue before?

 

Thanks in advance for any advice.

I217-LM network adapter problem with 100MB full duplex


I recently updated my Microsoft Deployment Toolkit repository with the latest CABs to support new Dell laptops.

 

After the site was updated, I started having problems with older Dell desktops and laptops being reimaged. The problem was that the network cable showed as disconnected after powering up the system, and it stayed disconnected until we physically disconnected the network cable and reconnected it. The 100 Mb full-duplex setting was correct. Updating to the latest version of the driver fixed the connection status problem on that older hardware.

 

We now have a persistent problem with the latest systems, which have the I217-LM NIC onboard. The disconnected wire status problem at boot-up occurred, and with the latest driver downloaded from Intel we have a duplex problem. All our Dell switches are configured to force the connection to 100 Mb full duplex, so we then force the NIC to 100 Mb full duplex. When we check the connection status in the Intel advanced tool, it shows 100 Mb half-duplex.

 

After having tested all the possibilities, the only way we could achieve a stable 100 Mb full-duplex connection was by setting both the switch port and the NIC to auto-negotiation. This is not an option for us because all the PCs are daisy-chained with an IP phone configured to work at 100 Mb full-duplex, and with other NICs auto-negotiation does not always work well.

 

I tried with the older driver versions available for download: the connection status problem at boot-up occurred with all versions prior to 18.7, and with version 18.7 it is impossible to force the speed to 100 Mb full-duplex.

 

Is it possible to have a quick fix for this problem? I am pretty sure it is a compatibility problem between the latest hardware NIC, driver, and Dell switches.

 

Thanks for your attention.

82599 hardware filter to only accept UDP4 traffic sent to 53 port


Hello,

 

I tested hardware filters on my Ubuntu + 82599 development environment and everything seemed to work great. I have further read the Intel and ethtool documentation, but I have been unable to find a solution to my next question. I have a DNS analysis tool and I would like to only accept UDP packets sent to/from port 53 (DNS requests/responses) and drop everything else. In your opinion, is it by any means possible to implement a hardware filter like the one below (drop all packets that are not UDP or are not sent to port 53)?

 

ethtool --config-ntuple eth4 flow-type !udp4 dst-port !53 action -1
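For reference, ethtool's ntuple/Flow Director rules on the 82599 only express positive matches plus an action, so the negated syntax above does not parse. A hedged sketch of rules that are accepted, steering DNS traffic to a chosen queue (interface name and queue number are illustrative, and the "drop everything else" part would still need to be handled elsewhere, e.g. in software):

ethtool -K eth4 ntuple on
ethtool -N eth4 flow-type udp4 dst-port 53 action 6   # DNS queries to RX queue 6
ethtool -N eth4 flow-type udp4 src-port 53 action 6   # DNS responses to RX queue 6
ethtool -u eth4                                       # list the programmed rules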

 

Thanks in advance and best regards,

Manuel Polonio

Install 82578DC Gigabit Network on 4.3.0-kali1-amd64


Hi you all,

I'm trying to build and install the e1000e module for the 82578DC on my 4.3 kernel, however the procedure is not working properly, as you can see below.

 

root@test:~/Desktop/e1000e-3.3.3/src# make install

Makefile:67: *** Kernel header files not in any of the expected locations.

Makefile:68: *** Install the appropriate kernel development package, e.g.

Makefile:69: *** kernel-devel, for building kernel modules and try again.  Stop.
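The Makefile error means the headers for the running kernel are not installed. A hedged sketch of the usual fix on a Debian/Kali system (assuming the standard linux-headers package naming):

apt-get update
apt-get install linux-headers-$(uname -r)
# then rebuild the module
cd ~/Desktop/e1000e-3.3.3/src && make install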

 

 

Thanks in advance.


X520 DP SFP+ causing network loop?


Hi,

 

yesterday one of our VMware ESXi 5.5 servers (Supermicro with X520 DP SFP+ NIC) crashed with a PSOD (Purple Screen Of Death).

 

At the exact same time the PSOD occurred, we had a total network outage of the whole VMware cluster, as both 10G switches, to which the two ports of that server's NIC are connected, had a lot of spanning-tree trouble (considering themselves the spanning tree root bridge, flapping network ports, broadcast storms, and so on).

 

This outage lasted *exactly* until the moment when we pushed the reset button on that server.

We already saw the same behavior in November 2015 on the same server with the same bad consequences and also solved it by resetting the server.

 

I know that sounds really weird, but the only explanation for this behavior that sounds reasonable to us is that the X520 NIC somehow turned into a kind of "bridge all traffic between the two ports" mode after ESXi suddenly crashed with the PSOD, causing a network bridging loop.

 

Has someone ever heard of such a weird behavior or can at least somebody imagine that this could have happened?

 

I think it would be possible to manually and intentionally achieve that behavior by directly configuring the network card, but could it happen accidentally?

 

Please let me know your thoughts about it.

 

Best regards,

Christian Hertel

 

 

 

----- Some additional NIC information ------

 

~ # esxcfg-nics -l

Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description

vmnic0  0000:02:00.00 igb         Down 0Mbps     Half   00:25:90:a4:28:56 1500   Intel Corporation 82576 Gigabit Network Connection

vmnic1  0000:02:00.01 igb         Down 0Mbps     Half   00:25:90:a4:28:57 1500   Intel Corporation 82576 Gigabit Network Connection

vmnic2  0000:04:00.00 ixgbe       Up   10000Mbps Full   90:e2:ba:3a:04:2c 9000   Intel Corporation 82599 10 Gigabit Dual Port Network Connection

vmnic3  0000:04:00.01 ixgbe       Up   10000Mbps Full   90:e2:ba:3a:04:2d 9000   Intel Corporation 82599 10 Gigabit Dual Port Network Connection

 

~ # ethtool vmnic2

Settings for vmnic2:

  Supported ports: [ FIBRE ]

  Supported link modes:   1000baseT/Full

  Supports auto-negotiation: Yes

  Advertised link modes:  1000baseT/Full

  Advertised auto-negotiation: Yes

  Speed: Unknown! (10000)

  Duplex: Full

  Port: FIBRE

  PHYAD: 0

  Transceiver: external

  Auto-negotiation: on

  Supports Wake-on: d

  Wake-on: d

  Current message level: 0x00000007 (7)

  Link detected: yes

 

~ # ethtool -i vmnic2

driver: ixgbe

version: 3.21.4iov

firmware-version: 0x61c10001

bus-info: 0000:04:00.0

 

~ # ethtool -k vmnic2

Offload parameters for vmnic2:

Cannot get device udp large send offload settings: Function not implemented

Cannot get device generic segmentation offload settings: Function not implemented

rx-checksumming: on

tx-checksumming: on

scatter-gather: on

tcp segmentation offload: on

udp fragmentation offload: off

generic segmentation offload: off

CVE-2015-2291 Intel Network Adapter Diagnostic Driver IOCTL DoS


A vulnerability in iqvw32.sys and iqvw64e.sys drivers has been discovered in the Intel Network Adapter Driver. Intel Network Adapter Diagnostic Driver is prone to multiple local buffer-overflow vulnerabilities. 

 

An attacker can exploit these issues to crash the affected application; denying service to legitimate users. Due to the nature of this issue, code-execution may be possible but this has not been confirmed. 

 

Note: This issue was previously titled 'Intel Network Adapter Diagnostic Driver CVE-2015-2291 Multiple Remote Code Execution Vulnerabilities'. The title and technical details have been changed to better reflect the underlying component affected.

 

When will a vendor-supplied patch be available?

 

Joel

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

after the update to Windows 10 (x64, Build 10240) the creation of a teaming group (static or IEEE802.3ad) with a I211+I217-V NIC fails.

 

Drivers have been upgraded to the latest version available, and multiple reinstallations with reboots didn't help either. Whenever the group creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up appears telling me that group creation has failed.

However the Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some kind of configuration seems to get done.

Using Windows 7 SP1 x64, the exact same setup worked flawlessly for months, so Windows 10 or the driver are the likely culprits.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

SR-IOV failed on Intel Xeon-D 1541's X552 10gbe NIC


Hi all,

 

I cannot figure out why I cannot enable SR-IOV on the Intel Xeon D-1541's X552 10 GbE NIC. It must be an issue with Intel's latest ixgbe driver, because on the same SoC board the Intel i350 1 GbE NIC's SR-IOV can be enabled.

[Attached screenshot: sr-iov_failed.jpg]

Following is the PCI device info and the ixgbe driver info:

root@pve1:/sys/bus/pci/devices/0000:03:00.1# lspci -vnnk -s  03:00.0

03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T [8086:15ad]

         Subsystem: Super Micro Computer Inc Device [15d9:15ad]

         Physical Slot: 0-1

         Flags: bus master, fast devsel, latency 0, IRQ 25

         Memory at fbc00000 (64-bit, prefetchable) [size=2M]

         Memory at fbe04000 (64-bit, prefetchable) [size=16K]

         Expansion ROM at 90100000 [disabled] [size=512K]

         Capabilities: [40] Power Management version 3

         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

         Capabilities: [70] MSI-X: Enable+ Count=64 Masked-

         Capabilities: [a0] Express Endpoint, MSI 00

         Capabilities: [100] Advanced Error Reporting

         Capabilities: [140] Device Serial Number 00-00-c9-ff-ff-00-00-00

         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)

         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

         Capabilities: [1b0] Access Control Services

         Capabilities: [1c0] Latency Tolerance Reporting

         Kernel driver in use: ixgbe

root@pve1:/sys/bus/pci/devices/0000:03:00.1# modinfo ixgbe

filename:       /lib/modules/4.2.8-1-pve/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko

version:        4.1.5

license:        GPL

description:    Intel(R) 10 Gigabit PCI Express Network Driver

author:         Intel Corporation, <linux.nics@intel.com>

srcversion:     9781CEF8A3110F93FF9DBA8

alias:          pci:v00008086d000015ADsv*sd*bc*sc*i*

alias:          pci:v00008086d00001560sv*sd*bc*sc*i*

alias:          pci:v00008086d00001558sv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*

alias:          pci:v00008086d00001557sv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Fsv*sd*bc*sc*i*

alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*

alias:          pci:v00008086d00001528sv*sd*bc*sc*i*

alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*

alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*

alias:          pci:v00008086d00001529sv*sd*bc*sc*i*

alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*

alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*

alias:          pci:v00008086d00001514sv*sd*bc*sc*i*

alias:          pci:v00008086d00001507sv*sd*bc*sc*i*

alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*

alias:          pci:v00008086d00001517sv*sd*bc*sc*i*

alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*

alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*

alias:          pci:v00008086d00001508sv*sd*bc*sc*i*

alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*

alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*

alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*

alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*

alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*

alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*

alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*

alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*

alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*

alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*

depends:        ptp,dca,vxlan

vermagic:       4.2.8-1-pve SMP mod_unload modversions

parm:           InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)

parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)

parm:           MQ:Disable or enable Multiple Queues, default 1 (array of int)

parm:           DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)

parm:           RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)

parm:           VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default=8) (array of int)

parm:           max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)

parm:           VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)

parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)

parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)

parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)

parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)

parm:           LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)

parm:           LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)

parm:           FdirPballoc:Flow Director packet buffer allocation level:

                         1 = 8k hash filters or 2k perfect filters

                         2 = 16k hash filters or 4k perfect filters

                         3 = 32k hash filters or 8k perfect filters (array of int)

parm:           AtrSampleRate:Software ATR Tx packet sample rate (array of int)

parm:           FCoE:Disable or enable FCoE Offload, default 1 (array of int)

parm:           LRO:Large Receive Offload (0,1), default 1 = on (array of int)

parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)

parm:           dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)

parm:           vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)
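For what it's worth, the modinfo output above shows a max_vfs module parameter, and recent kernels also expose a sysfs knob; a hedged sketch of both approaches (VF count illustrative, PCI address taken from the lspci output above, and it is an assumption that this out-of-tree build honors the sysfs interface):

# Option 1: out-of-tree ixgbe module parameter, one value per port, applied at module load
modprobe -r ixgbe
modprobe ixgbe max_vfs=4,4

# Option 2: in-kernel sysfs interface, per PCI function
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Verify that the virtual functions appeared
lspci | grep -i "virtual function"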

82599 VF in promiscuous mode


I am trying to run a bridge inside a VM and would like the VF interface to be in promiscuous mode so that it can receive packets for any unicast MAC address.  I am currently unable to get this to work and have found some interesting threads about promiscuous mode not working in a VF.  Mainly these:

 

Intel i350: promiscuous mode and SR-IOV

promisc function does not work with VF in XL710 Network card

 

However this thread talks about setting all ones in the UTA in order to enable promiscuous mode:

 

ixgbe 3.21.2 driver for 82599

 

So some questions - is this supported?  Seems there is a bit of conflicting information out there.  Even if not by the published driver, is it supported by the hardware?  The driver companion guide doesn't really have enough information about how to make this work or explain why or where the all 1s thing would work.  I have tried and have not been able to get it to work.  I'd be curious to understand what the performance implication is.

 

I have the ability to peek/poke registers and manipulate the driver if need be.  I can upgrade if need be but I've looked at the differences in the drivers around this area and can't see anything different enough to warrant upgrading.

ixgbe driver does not support MAC address change


My ESXi 5.5 system has an Intel X540-T2 installed and is using ixgbe driver version 3.21.

 

In the driver code, I can see the following lines, which show that it does not support MAC address changes in ESXi 5.5: if vfinfo[vf].pf_set_mac is true, then when the guest OS changes its MAC, the ixgbe driver won't update it in vfinfo accordingly and will report the error "VF attempted to set a new MAC address but it already has an administratively set MAC address".

 

int ixgbe_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)

{

  s32 retval = 0;

  struct ixgbe_adapter *adapter = netdev_priv(netdev);

  if (!is_valid_ether_addr(mac) || (vf >= adapter->num_vfs))

      return -EINVAL;

  dev_info(pci_dev_to_dev(adapter->pdev), "setting MAC %pM on VF %d\n", mac, vf);

  dev_info(pci_dev_to_dev(adapter->pdev), "Reload the VF driver to make this change effective.\n");

  retval = ixgbe_set_vf_mac(adapter, vf, mac);

  if (retval >= 0) {

      /* pf_set_mac is used in ESX5.1 and base driver but not in ESX5.5 */

      adapter->vfinfo[vf].pf_set_mac = true;

      if (test_bit(__IXGBE_DOWN, &adapter->state)) {

          dev_warn(pci_dev_to_dev(adapter->pdev), "The VF MAC address has been set, but the PF device is not up.\n");

          dev_warn(pci_dev_to_dev(adapter->pdev), "Bring the PF device up before attempting to use the VF device.\n");

      }

  } else {

      dev_warn(pci_dev_to_dev(adapter->pdev), "The VF MAC address was NOT set due to invalid or duplicate MAC address.\n");

  }

  return retval;

}

 

static int ixgbe_set_vf_mac_addr(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)

{

  u8 *new_mac = ((u8 *)(&msgbuf[1]));

 

  if (!is_valid_ether_addr(new_mac)) {

      e_warn(drv, "VF %d attempted to set invalid mac\n", vf);

      return -1;

  }

 

  if (adapter->vfinfo[vf].pf_set_mac && memcmp(adapter->vfinfo[vf].vf_mac_addresses, new_mac, ETH_ALEN)) {

      u8 *pm = adapter->vfinfo[vf].vf_mac_addresses;

      e_warn(drv,  "VF %d attempted to set a new MAC address but it already has an administratively set MAC address %2.2X:%2.2X:%2.2X:%2.2X:%2.2X:%2.2X\n",

                            vf, pm[0], pm[1], pm[2], pm[3], pm[4], pm[5]);

      e_warn(drv, "Check the VF driver and if it is not using the correct MAC address you may need to reload the VF driver\n");

      return -1;

  }

  return ixgbe_set_vf_mac(adapter, vf, new_mac) < 0;

}

 

However, according to the VMware documentation, it should be supported. Why does this contradiction happen?

Intel Boot Agent - CL and GE


Hello,

 

Can someone point me in the right direction, to a KB article, or explain what the difference is between the different Intel Boot Agents?

 

Recently I've been seeing the following:

 

CL

GE

FE

 

Is there a matrix that explains the differences?

 

Or could these be successive product generations: FE became GE, which became CL (or something like that)?

 

Thanks


Bad impressions of the network card after driver update.


I use an original Intel 82574L network card (EXPI9301CTBLK). After upgrading from the built-in Windows 7 x64 driver 11.0.5.22 to the latest 12.7.28.0, there are delays when loading web pages and a general slowdown. Why do the new drivers slow things down? After rolling back to version 11.0.5.22, everything becomes normal.

 

PS: The driver also has the FullDuplex (Half) and HalfDuplex (Full) modes mixed up.

XL710 poll-mode fix in PF driver incomplete?


We have a polling XL710 VF driver and have found the appropriate poll-mode workaround in the DPDK. We are, however, not using the DPDK and are relying on the accompanying fix made to the latest Intel PF Linux drivers, e.g. version 1.3.49. However, this fix does not work, and we believe it is incomplete. The part we are referring to involves the DIS_AUTOMASK_N flag in the GLINT_CTL register. The code in the above release (and earlier ones) is (i40e_virtchnl_pf.c:344):

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&

        (vector_id == 0)) {

        reg = rd32(hw, I40E_GLINT_CTL);

        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {

            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;

            wr32(hw, I40E_GLINT_CTL, reg);

        }

We believe this should say:

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&
        (vector_id == 1)) {
        reg = rd32(hw, I40E_GLINT_CTL);
        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {
            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK |
                   I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
            wr32(hw, I40E_GLINT_CTL, reg);
        }
    }

With the above changes the fix then works.

The addition of the I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK is as per the datasheet S 8.3.3.1.4.2.

The test for vector_id == 1 is because the default MSIX vector is 1. However there is a good argument for removing this test altogether since the vector involved depends on the VF implementation. Note that the fix in the DPDK eliminates this test.

 

We would appreciate it if you could verify the above and make changes to the released PF driver.

XL710 - BIOS/Initialization Error - The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.


Hi,

We have installed XL710 40GbE B1/Rev 02 NICs in one of our servers and have upgraded the NIC drivers to the latest available i40e driver version, i.e. 1.4.25.

We are seeing BIOS error like this and observing the same for every reboot of the machine.

 

  • The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.

 

But we are actually using the correct and latest version of the driver, 1.4.25. This error is not observed in the dmesg logs and appears only on the boot-time screens, as part of driver and firmware initialization.

Can someone help us resolve the issue? The dmesg logs from the server are attached.

 

We are using CentOS 7.1, and it is happening on both HP and Dell servers.

 

===Snippet from dmesg===========

[    3.431409] i40e 0000:03:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    3.841578] i40e 0000:03:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    7.258924] i40e 0000:81:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    7.798504] i40e 0000:81:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    8.205727] i40e 0000:82:00.0: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

[    8.614473] i40e 0000:82:00.1: fw 5.0.40043 api 1.5 nvm 5.02 0x80002285 0.0.0

 

03:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

03:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

81:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

81:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 02)

82:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 01)

82:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] (rev 01)
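As a quick cross-check, a hedged sketch for comparing the loaded i40e driver version against the firmware/NVM version each port reports (interface names are illustrative):

for iface in ens1f0 ens1f1; do
    ethtool -i "$iface" | grep -E 'driver|version|firmware'
done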

 

Thanks & Regards,

Suresh.

IBIS Model for 82599 NIC


Hi, I'd like to do a signal integrity simulation for the "JL82599ES SR1VN" PCIe interface, but I couldn't find an IBIS model on the site.

 

Do you have a reference?

Thanks.

I210 AVB Timers


Hello All,

I am not sure if this is the correct forum to post my question and I would be thankful if someone can point me to the correct one if the question is not relevant.

The question I want to ask is related to the Intel i210-AT chip which I am using for my Industrial solution. I am writing a new Linux driver for the External NIC, which has the i210 chip, for handling packets in a time critical network.

I am using the Time SYNC (section 7.8 in the Intel® Ethernet Controller I210 Datasheet) registers i.e. SYSTIM, TRGTTIML/H0 and TRGTTIML/H1 to create timers using the method explained in "7.8.3.3.1 SYSTIM Synchronized Level Change Generation on SDP Pins"

The timers created are used by the Linux drivers to synchronize the packet transmission in the SR mode (i.e. using launchTime).

I have completed the implementation, and the setup works fine if the timers are created in a non-periodic sequence. The problem arises if I try to reset the values or increment the timeout for the timers periodically, where the period is less than the timeout values.

I am not sure if I have presented my problem correctly here. I would be happy to share other details to clarify the implementation details.

 

Regards,

Gaurav

 
