Channel: Intel Communities : Discussion List - Wired Ethernet

X710 10GbE SFP card, i40e and NIC Link is Down due to DCB init failed and tx_timeout


Hi,

 

We run CentOS 7.4 with kernel 4.9.x on HP hardware and noticed that a few servers got their network interfaces marked down by the kernel. In the logs we saw many reports of "DCB init failed -53, disabled", "TX driver issue detected, PF reset issued" and "eth0: tx_timeout: VSI_seid", followed by the link being marked down.

 

Here is the full log:

2017-10-04T15:50:29.908202+02:00kernel: i40e 0000:04:00.1 eth0: tx_timeout recovery level 1, hung_queue 11

2017-10-04T15:50:30.061686+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:30.061693+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:50:36.085291+02:00kernel: i40e 0000:04:00.1 eth0: tx_timeout: VSI_seid: 388, Q 2, NTC: 0x20, HWB: 0x20, NTU: 0x100, TAIL: 0x100, INT: 0x0

2017-10-04T15:50:36.085295+02:00kernel: i40e 0000:04:00.1 eth0: tx_timeout recovery level 2, hung_queue 2

2017-10-04T15:50:39.328928+02:00kernel: i40e 0000:04:00.0: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:39.328936+02:00kernel: i40e 0000:04:00.0: DCB init failed -53, disabled

2017-10-04T15:50:39.637232+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:39.637237+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:50:40.111808+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:50:40.788697+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:40.788702+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:50:46.839994+02:00kernel: i40e 0000:04:00.1 eth0: tx_timeout: VSI_seid: 388, Q 11, NTC: 0x54, HWB: 0x54, NTU: 0xed, TAIL: 0xed, INT: 0x1

2017-10-04T15:50:46.839998+02:00kernel: i40e 0000:04:00.1 eth0: tx_timeout recovery level 3, hung_queue 11

2017-10-04T15:50:50.119447+02:00kernel: i40e 0000:04:00.0: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:50.119455+02:00kernel: i40e 0000:04:00.0: DCB init failed -53, disabled

2017-10-04T15:50:50.301798+02:00kernel: i40e 0000:04:00.0 eth1: NIC Link is Down

2017-10-04T15:50:50.423744+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:50:50.423752+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:50:50.600812+02:00kernel: i40e 0000:04:00.1 eth0: NIC Link is Down

2017-10-04T15:50:50.764799+02:00kernel: i40e 0000:04:00.1 eth0: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

2017-10-04T15:50:53.234804+02:00kernel: i40e 0000:04:00.0 eth1: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None

2017-10-04T15:51:17.201808+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:17.783439+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:17.783447+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:18.392805+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:18.814970+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:18.814978+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:19.436807+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:19.767258+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:19.767265+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:20.440800+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:20.793083+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:20.793091+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:21.471805+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:21.810807+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:21.810811+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:22.468707+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:22.772829+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:22.772833+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:23.411802+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:23.796867+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:23.796872+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:24.440800+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:24.758945+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:24.758950+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:25.411806+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:25.782778+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:25.782781+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:26.417804+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:26.804559+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:26.804568+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:27.448800+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:27.765882+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:27.765889+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:33.187800+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:33.784824+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:33.784827+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:34.340383+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:34.810411+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:34.810415+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:35.350800+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-04T15:51:35.769594+02:00kernel: i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-04T15:51:35.769600+02:00kernel: i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-04T15:51:36.404803+02:00kernel: i40e 0000:04:00.1: TX driver issue detected, PF reset issued

 

The firmware version is 5.60 0x800033b1 1.1752.0.

 

Because this issue occurred many times, we decided to downgrade the firmware to 5.60 0x80002dac 1.1618.0.

We have been running this firmware for 20 hours and so far the issue has not reoccurred. However, we still see reports of DCB init failed; here is part of the log:

2017-10-06T07:36:04.508245+02:00 kernel: [60714.891133] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:04.508253+02:00 kernel: [60714.941154] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:07.485087+02:00 kernel: [60717.910685] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:08.544822+02:00 kernel: [60718.922177] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:08.544826+02:00 kernel: [60718.976268] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:09.662086+02:00 kernel: [60720.087544] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:10.526650+02:00 kernel: [60720.906953] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:10.526657+02:00 kernel: [60720.957855] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:12.127091+02:00 kernel: [60722.553258] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:12.509188+02:00 kernel: [60722.891523] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:12.509193+02:00 kernel: [60722.941451] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:14.542083+02:00 kernel: [60724.968613] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:15.515736+02:00 kernel: [60725.898114] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:15.515742+02:00 kernel: [60725.948320] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:17.054084+02:00 kernel: [60727.482217] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:17.499895+02:00 kernel: [60727.881722] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:17.499899+02:00 kernel: [60727.931928] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:23.073089+02:00 kernel: [60733.498949] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:23.519433+02:00 kernel: [60733.898115] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:23.519440+02:00 kernel: [60733.950592] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:23.750083+02:00 kernel: [60734.175558] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:24.542845+02:00 kernel: [60734.922501] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:24.542851+02:00 kernel: [60734.973989] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:25.804083+02:00 kernel: [60736.229381] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:26.527743+02:00 kernel: [60736.906173] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:26.527750+02:00 kernel: [60736.958286] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:26.761082+02:00 kernel: [60737.185843] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:27.549959+02:00 kernel: [60737.930406] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:27.549965+02:00 kernel: [60737.981294] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:28.730084+02:00 kernel: [60739.156229] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:29.127699+02:00 kernel: [60739.509889] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:29.127706+02:00 kernel: [60739.559362] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:30.195079+02:00 kernel: [60740.620866] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:30.494200+02:00 kernel: [60740.874560] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:30.494206+02:00 kernel: [60740.924833] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:32.206081+02:00 kernel: [60742.632196] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:32.542615+02:00 kernel: [60742.922219] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:32.542621+02:00 kernel: [60742.974168] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:33.418078+02:00 kernel: [60743.842578] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:34.523784+02:00 kernel: [60744.905169] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:34.523792+02:00 kernel: [60744.955104] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:35.379092+02:00 kernel: [60745.805674] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:35.546831+02:00 kernel: [60745.928123] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:35.546837+02:00 kernel: [60745.978287] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:37.173085+02:00 kernel: [60747.597806] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:37.505649+02:00 kernel: [60747.884064] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:37.505655+02:00 kernel: [60747.935578] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:38.903089+02:00 kernel: [60749.323330] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:39.518842+02:00 kernel: [60749.897537] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:39.518849+02:00 kernel: [60749.949198] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:40.001094+02:00 kernel: [60750.425300] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:40.294976+02:00 kernel: [60750.672969] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:40.294982+02:00 kernel: [60750.725401] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:41.763084+02:00 kernel: [60752.187674] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:42.522862+02:00 kernel: [60752.903896] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:42.522869+02:00 kernel: [60752.953498] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:43.326092+02:00 kernel: [60753.751729] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:43.548946+02:00 kernel: [60753.929795] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:43.548952+02:00 kernel: [60753.980056] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:45.499095+02:00 kernel: [60755.925140] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:45.750231+02:00 kernel: [60756.131802] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:45.750238+02:00 kernel: [60756.181360] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:47.697084+02:00 kernel: [60758.122317] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:48.538715+02:00 kernel: [60758.919704] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:48.538721+02:00 kernel: [60758.969756] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-06T07:36:48.774079+02:00 kernel: [60759.199913] i40e 0000:04:00.1: TX driver issue detected, PF reset issued

2017-10-06T07:36:49.499160+02:00 kernel: [60759.880666] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

2017-10-06T07:36:49.499168+02:00 kernel: [60759.930263] i40e 0000:04:00.1: DCB init failed -53, disabled

 

Furthermore, we noticed a kernel NETDEV WATCHDOG warning, see below:

2017-10-05T17:54:26.526390+02:00 kernel: [11418.560560] i40e 0000:04:00.1: DCB init failed -53, disabled

2017-10-05T17:54:32.481310+02:00 kernel: [11424.483864] ------------[ cut here ]------------

2017-10-05T17:54:32.481314+02:00 kernel: [11424.504705] WARNING: CPU: 3 PID: 0 at net/sched/sch_generic.c:316 dev_watchdog+0x232/0x240

2017-10-05T17:54:32.481315+02:00 kernel: [11424.541886] NETDEV WATCHDOG: north (i40e): transmit queue 11 timed out

2017-10-05T17:54:33.507030+02:00 kernel: [11424.571387] Modules linked in: sctp_diag sctp dccp_diag dccp unix_diag udp_diag tcp_diag inet_diag 8021q garp mrp xfs libcrc32c loop vfat fat sb_edac edac_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd iTCO_wdt intel_cstate iTCO_vendor_support intel_rapl_perf i2c_i801 lpc_ich pcspkr mfd_core hpwdt hpilo i2c_smbus fjes sg wmi ipmi_si acpi_power_meter ipmi_msghandler ioatdma shpchp ip_tables ext4 jbd2 mbcache sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ixgbe ttm mdio i40e dca tg3 ptp crc32c_intel drm pps_core hpsa scsi_transport_sas dm_mirror dm_region_hash dm_log dm_mod

2017-10-05T17:54:33.507035+02:00 kernel: [11424.874760] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 4.9.52-1.booking.el7.x86_64 #1

2017-10-05T17:54:33.507036+02:00 kernel: [11424.910445] Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 04/25/2017

2017-10-05T17:54:33.507038+02:00 kernel: [11424.947905]  ffff880c4fcc3db0 ffffffff81363cdc ffff880c4fcc3e00 0000000000000000

2017-10-05T17:54:33.507043+02:00 kernel: [11424.981294]  ffff880c4fcc3df0 ffffffff81082441 0000013c00000246 000000000000000b

2017-10-05T17:54:33.507045+02:00 kernel: [11425.014606]  ffff880c48743000 0000000000000040 ffff88184623cf40 0000000000000003

2017-10-05T17:54:33.507045+02:00 kernel: [11425.047951] Call Trace:

2017-10-05T17:54:33.507047+02:00 kernel: [11425.059172]  <IRQ>

2017-10-05T17:54:33.507047+02:00 kernel: [11425.067975]  [<ffffffff81363cdc>] dump_stack+0x63/0x87

2017-10-05T17:54:33.507048+02:00 kernel: [11425.091645]  [<ffffffff81082441>] __warn+0xd1/0xf0

2017-10-05T17:54:33.507050+02:00 kernel: [11425.113566]  [<ffffffff810824bf>] warn_slowpath_fmt+0x5f/0x80

2017-10-05T17:54:33.507051+02:00 kernel: [11425.139919]  [<ffffffff81657882>] dev_watchdog+0x232/0x240

2017-10-05T17:54:33.507051+02:00 kernel: [11425.165073]  [<ffffffff81657650>] ? dev_deactivate_queue.constprop.27+0x60/0x60

2017-10-05T17:54:33.507052+02:00 kernel: [11425.197971]  [<ffffffff810f4d45>] call_timer_fn+0x35/0x120

2017-10-05T17:54:33.507052+02:00 kernel: [11425.222648]  [<ffffffff810f59d6>] run_timer_softirq+0x1f6/0x4b0

2017-10-05T17:54:33.507053+02:00 kernel: [11425.249315]  [<ffffffff810fd8eb>] ? ktime_get+0x3b/0xb0

2017-10-05T17:54:33.507053+02:00 kernel: [11425.272507]  [<ffffffff81053006>] ? lapic_next_deadline+0x26/0x30

2017-10-05T17:54:33.507055+02:00 kernel: [11425.299879]  [<ffffffff817620a9>] __do_softirq+0xc9/0x26d

2017-10-05T17:54:33.507056+02:00 kernel: [11425.325941]  [<ffffffff81088929>] irq_exit+0xd9/0xf0

2017-10-05T17:54:33.507056+02:00 kernel: [11425.348778]  [<ffffffff81761ef2>] smp_apic_timer_interrupt+0x42/0x50

2017-10-05T17:54:33.507057+02:00 kernel: [11425.377940]  [<ffffffff817610ac>] apic_timer_interrupt+0x8c/0xa0

2017-10-05T17:54:33.507057+02:00 kernel: [11425.405474]  <EOI>

2017-10-05T17:54:33.507058+02:00 kernel: [11425.414086]  [<ffffffff8175ee61>] ? poll_idle+0x31/0x5d

2017-10-05T17:54:33.507060+02:00 kernel: [11425.437584]  [<ffffffff815d9b2d>] cpuidle_enter_state+0x9d/0x260

2017-10-05T17:54:33.507061+02:00 kernel: [11425.464852]  [<ffffffff815d9d27>] cpuidle_enter+0x17/0x20

2017-10-05T17:54:33.507061+02:00 kernel: [11425.489937]  [<ffffffff810c9ab3>] call_cpuidle+0x23/0x40

2017-10-05T17:54:33.507062+02:00 kernel: [11425.514326]  [<ffffffff810c9d29>] cpu_startup_entry+0x159/0x250

2017-10-05T17:54:33.507062+02:00 kernel: [11425.541202]  [<ffffffff81051a04>] start_secondary+0x154/0x190

2017-10-05T17:54:33.507063+02:00 kernel: [11425.567550] ---[ end trace 1afc42121276ab06 ]---

 

I should mention that we have disabled LLDP support in the kernel, as we run the lldpd daemon on our servers.

Do you know whether the above issue and these errors are fixed in the latest firmware?
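For reference, the recurring messages above can be tallied with a short script to spot affected hosts before the link is actually marked down. This is only an illustrative sketch (the `summarize` helper and the counter names are made up here); the regexes match the exact message formats shown in the log excerpt:

```python
import re

# Patterns for the i40e messages seen in the kernel log above.
PATTERNS = {
    "tx_timeout": re.compile(r"tx_timeout recovery level (\d+)"),
    "pf_reset":   re.compile(r"TX driver issue detected, PF reset issued"),
    "dcb_failed": re.compile(r"DCB init failed"),
    "link_down":  re.compile(r"NIC Link is Down"),
}

def summarize(log_text):
    """Count i40e recovery events and track the highest recovery level seen."""
    counts = {name: 0 for name in PATTERNS}
    max_level = 0
    for line in log_text.splitlines():
        for name, pat in PATTERNS.items():
            m = pat.search(line)
            if m:
                counts[name] += 1
                if name == "tx_timeout":
                    max_level = max(max_level, int(m.group(1)))
    counts["max_recovery_level"] = max_level
    return counts
```

A recovery level that climbs toward 3, as in the log above, is what precedes the link being marked down.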

 

Cheers,

Pavlos


UDP packets frozen with at least I219-V, I219-LM and I217-LM


Hello,

 

I am new to this community. I subscribed to ask a question here because I am really stuck.

My company makes electronic devices that communicate with a PC (running Windows 7, 8.1 or 10) over 100 Mbit/s Ethernet using UDP.

It works perfectly with other brands of Ethernet adapters, but there is strange behaviour with the Intel NICs we tried (at least the I219-V, I219-LM and I217-LM).

 

Basically, our electronic devices can be considered as cameras capturing about 150 images per second.

We send a small command on one socket via UDP to tell it to capture an image, then we receive the compressed image as a set of UDP packets on another socket.

Each packet contains up to 1444 bytes of data (plus the headers of the various protocol layers, which in the end does not exceed the standard frame size, so there is no need for jumbo frames).

 

The problem is that sometimes (as often as every 5 seconds, or as rarely as only once in a 10-minute period) I wait forever (until the defined UDP timeout) for data that has already been sent; I can see it by sniffing the traffic from another computer connected to the same switch. I could believe the UDP packet was lost by the Intel NIC, but it has not been lost: if I send a new image-capture command, the packets I was waiting for finally arrive, followed by the packets of the new image.

Why are those packets stuck?

Is there any advanced parameter that I could modify from the device driver's configuration window or from a registry key?
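The command/response pattern described above can be sketched on the loopback interface as follows. This is purely illustrative (the real device protocol is proprietary; the ports, payloads and the `device_stub` stand-in are made up here); the point is the bounded receive timeout, so a stalled NIC queue surfaces as `socket.timeout` rather than an indefinite wait:

```python
import socket
import threading

def device_stub(cmd_port, data_port, ready):
    """Stand-in for the camera: waits for one command, replies with fragments."""
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cmd.bind(("127.0.0.1", cmd_port))
    ready.set()
    cmd.recvfrom(64)                      # block until the capture command
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(3):                    # a real image would be many packets
        out.sendto(b"frag%d" % i, ("127.0.0.1", data_port))
    out.close()
    cmd.close()

def capture_one(cmd_port=50101, data_port=50102, timeout=2.0):
    """Send a capture command on one socket, await fragments on another."""
    data = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data.bind(("127.0.0.1", data_port))
    data.settimeout(timeout)              # bounded wait, not forever
    ready = threading.Event()
    t = threading.Thread(target=device_stub,
                         args=(cmd_port, data_port, ready))
    t.start()
    ready.wait()
    cmd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cmd.sendto(b"CAPTURE", ("127.0.0.1", cmd_port))
    frags = []
    try:
        for _ in range(3):
            pkt, _ = data.recvfrom(1500)  # <= 1444 B payload per packet
            frags.append(pkt)
    except socket.timeout:
        pass                              # the "frozen packets" symptom
    t.join()
    cmd.close()
    data.close()
    return frags
```

On loopback this always completes; on the affected Intel NICs the described symptom would be `capture_one` returning an incomplete list after the timeout, with the missing fragments only delivered once the next command is sent.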

 

Note that I tried updating the Intel driver to recent versions (the latest is 22.4.0.1). It seems to perform a little better than some other versions I tried (like the one installed by default by Windows), but none of the versions I tried works perfectly.

 

Many thanks in advance if you can help me understand what is wrong (either on my side or in the driver). I have been struggling with this problem for months, and we currently have to equip customers whose laptops have an Intel NIC with USB Ethernet adapters from another brand to bypass the problem.

 

Karl

Question regarding RSS Profile as default on Driver(I350)


Installed Windows 2012 R2: RSS Profile: NUMAStatic (default)

After installing the NIC driver (I350, 4-port): RSS Profile: Closest

I want to know why Intel sets the RSS Profile to Closest.

Closest was the default value on Windows 2008/R2.

If it was simply carried over from Windows 2008/R2, that is fine; I can change it.

But I need to know whether there is any reason behind it.

I actually need to change the profile to NUMAStatic, but I am hesitant to change it if there is a deliberate reason "Closest" is the default.

I219-V 22.7.1 Ethernet Drivers not installing


This is tearing me apart mentally.

 

An issue arose yesterday where my Internet was permanently listed as 'Limited'. I've tried everything to fix it: I've uninstalled and reinstalled drivers (despite the question), and I've used command prompts to flush my IP and DNS. It turned out the issue was that my drivers were outdated.

 

Now for the annoying part-

 

No matter which version of the network adapter driver I install, it always reverts to version 12.15.22.6 from April last year. The installation instructions are pointless at this point, as nothing changes. I need to be pointed in the right direction before I take my PC in to get looked at.

 

I just want to get my work done, it should not be this hard to fix such a minute problem.

 

The problem with my Ethernet started yesterday after I gave administrator access to my normal PC account. After that, Windows apps kept requesting an Internet connection to work, which baffled me as I was on YouTube at the time. I looked at the bottom-right corner and saw that my Ethernet icon had disappeared; it turns out my internal IP & DNS had disappeared, and my PC doesn't know what to do with itself as it keeps listing version 12.15.22.6 as the most recent version of the driver (see the pic I provide below).

 

Thanks in advance to those who help out.

Screenshot 2017-10-09 19.32.png

Wired Ethernet 82579 fails on boot since win10 update Sept17


Hi there. I hope you can help.

 

The Intel 82579 network adapter fails to start on boot since the latest Windows 10 update (September 2017, the major "Creators Update" rebuild).

Wi-Fi comes on, but I have to restart the PC for the Ethernet to be found. Then it works fine.

 

I've updated, uninstalled and reinstalled drivers (the latest driver updates from Intel were last week; the installed Ethernet driver is shown as July 2016, version 12.15.31.4).

 

I've turned off "energy efficiency" mode and also RSS following a recent thread
I've enabled legacy switch compatibility
I've set Linkspeed to forced

 

 

The mobo is an Asus P8Z68-V running an i7-2600K chip and 16 GB RAM. It's getting on, but there's plenty of life left in it; I don't plan to change kit for some time.
BIOS, chipset and other drivers are all up to date.

 

I really don't want to have to boot and then re-boot my desktop every day for the next few years!

 

Any help much appreciated

 

 

Chris.

I219-LM: Wireshark can not see VLAN tag header and Ostinato can not send VLAN tagged frames


Hello,

 

I have Wireshark (v2.4.1) and the Ostinato Network Traffic Generator (v0.6) installed on my laptop (Windows 10 Pro version 1703, OS Build 15063.608) with an Intel Ethernet Connection I219-LM. However, I am not able to either receive or send VLAN-tagged packets. Untagged packets are fine in both the transmit and receive directions.

 

I did not have this issue with my old laptop (Windows 7 Enterprise SP1, Intel 82579LM): I was able to use Wireshark to capture VLAN-tagged frames and see the VLAN tag in the captured results, and I was able to send VLAN-tagged frames using Ostinato or Linux tcpreplay.

 

I found the thread https://communities.intel.com/thread/22204 and added a "MonitorMode" registry key using regedit.

 

In my case:

- Setting the MonitorMode value to 1 did not help. Wireshark still could not receive tagged frames, and Ostinato could not transmit tagged frames.

- Setting the MonitorMode value to 2 helped a bit. Wireshark could receive tagged frames, but the VLAN tag was stripped in the capture result; Ostinato still could not transmit tagged frames.
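For reference, the registry change described above typically takes a form like the following. The class GUID is the standard network adapter device class; the four-digit instance key (0007 here) is only a placeholder and varies per system, so locate the instance whose DriverDesc matches the I219-LM before editing:

```
Windows Registry Editor Version 5.00

; Example only: 0007 is a placeholder instance key; check DriverDesc first.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0007]
"MonitorMode"=dword:00000002
```

A driver restart (disable/enable the adapter) or a reboot is usually needed for the value to take effect.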

 

My goals are: 1) to see VLAN tags in the Wireshark capture results; 2) to use Ostinato or tcpreplay to send VLAN-tagged frames.

 

System information -

System Model: Dell Latitude E7470

OS:           Windows 10 Pro 10.0.15063 N/A Build 15063

Driver:       Intel(R) Ethernet Connection I219-LM. Driver version: 12.13.17.7

I also installed the Intel® Network Adapter Driver from https://downloadcenter.intel.com/download/25016/Intel-Network-Adapter-Driver-for-Windows-10.

 

Any advice/suggestions are appreciated.

 

Regards,

Sam

Audio latency on my DAW: is "Intel(R) Ethernet Connection (2) I219-V" the culprit?


Hello, I recently purchased a new system (running Windows 10 Pro) for music production, whose specs I have attached below. I get audio glitches and an intermittently freezing playback cursor in my digital audio workstation of choice (Bitwig Studio 2.1.4), and I verified the problem is also present in Ableton Live 9.7.4.

After doing some research on the internet I found that the NIC might be the culprit. Indeed, as soon as I uninstall the Intel Ethernet Connection, either via Device Manager or via the BIOS, the lag completely disappears even in complex projects, and I noticed the same happens if I just switch the router off. The cursor freeze can last for over 4 seconds, during which nothing on the screen moves, not even the mixer faders. Audio glitches occur more rarely.

I tried to update every possible driver and asked the PC manufacturer for advice. They think it can't be a software issue and that I should replace the Intel NIC with another brand, but I am not quite sure. Bitwig was running perfectly on my old system with a full Ethernet load, under Windows 8.1...

Intel(R) Ethernet Connection (2) I219-V: what's with the (2)?


I have an Asus Z170 Pro Gaming motherboard with the I219-V LAN port, but even on a fresh Windows 7 or Windows 10 installation the port always shows up with the (2) prefix, even though it's the only LAN port available and it's the first time drivers have been installed for it (for that OS installation, after formatting the HDD). The driver version I'm currently using is 12.15.25.6 from earlier this year.

 

I'm pretty sure that at some stage I've seen it listed as simply Intel(R) Ethernet Connection I219-V, and I'd like to get back to that, but how? FWIW the hardware is working fine; I'm just fussy about my system configuration and would like this to be as originally intended.

I219-V.jpg


Re: Intel(R) Ethernet Connection (2) I219-V: what's with the (2) or (3)?


I am a desktop support tech for a college. We use WOL, and I can tell you for a fact there is a bug in your adapter's drivers. I set all of the settings on the original NIC, located in Device Manager under the name "Intel(R) Ethernet Connection I219-V". Recently I have been getting requests from end users whose systems, which had been waking, were no longer waking. When I show up, sure enough the name of the NIC has changed to "Intel(R) Ethernet Connection (2) I219-V" in Device Manager, and when I look at the device properties, sure enough none of the magic-packet settings are checked.

 

 

I corrected the issue by setting the property values back to the correct settings, but will it magically create another, maybe a (3)?

Intel network driver no longer supports silent install (ver 22.0.1)


Our deployment system has been installing Intel network drivers for years using this approach:
Unzip "PROWinxXX ver XX.exe" and then run the appropriate DXSetup.exe (or SetupBD for older versions) with the /qr argument.

 

This is in a way also working for 22.0.1... but after installation, the "We invite you to join the Intel Product Improvement Program" popup appears, and deployment halts.
How do I suppress this popup during a silent install?

 

Thanks,

Intel X710-DA4 / VMware ESXi 6.5u1 - Malicious Driver Detection Event Occurred


Hello,

 

we're having problems with an Intel X710-DA4 retail card on VMware ESXi 6.5u1. After some time (usually minutes to hours) of sustained traffic on the NIC, we see the following in vmkernel.log:

 

2017-08-11T12:26:02.554Z cpu18:66233)i40en: i40en_HandleMddEvent:6495: Malicious Driver Detection event 0x02 on TX queue 0 PF number 0x03 VF number 0x00

2017-08-11T12:26:02.554Z cpu18:66233)i40en: i40en_HandleMddEvent:6521: TX driver issue detected, PF reset issued

 

 

The network port in question is then apparently shut down; although the link stays up, it passes no more network traffic. Only a reboot of the server resets the network port and allows traffic to flow through it again.

The traffic pattern that triggers the issue is usually TCP traffic of >300 Mbit/s passing through a firewall virtual machine, entering on one virtual interface and exiting through another.

 

We are using ESXi 6.5u1 with the built-in i40en driver, as well as the latest NVM firmware version 5.05:

 

0000:82:00.0 8086:1572 8086:0004 vmkernel vmnic2

0000:82:00.1 8086:1572 8086:0000 vmkernel vmnic3

0000:82:00.2 8086:1572 8086:0000 vmkernel vmnic4

0000:82:00.3 8086:1572 8086:0000 vmkernel vmnic5

 

esxcli network nic get -n vmnic3

   Advertised Auto Negotiation: false

   Advertised Link Modes: 10000BaseSR/Full

   Auto Negotiation: false

   Cable Type: FIBRE

   Current Message Level: -1

   Driver Info:

         Bus Info: 0000:82:00:1

         Driver: i40en

         Firmware Version: 5.05 0x80002898 1.1568.0

         Version: 1.3.1

   Link Detected: true

   Link Status: Up

   Name: vmnic3

   (...)

 

More details on the problem:

  • We are not using SR-IOV.
  • The exact driver version is i40en 1.3.1-5vmw.650.1.26.5969303. We have observed the same issue with a previous driver version 1.3.1-1OEM.600.0.0.2768847.
  • The issue happens on multiple hosts, all with the same Intel X710-DA4 adapter.
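MDD events are often triggered by TX descriptors produced by hardware offload paths, so one experiment (a hedged workaround to help isolate the trigger, not a fix confirmed by Intel or VMware) is to disable hardware TSO on the host and see whether the PF resets stop:

```shell
# Disable hardware TSO for IPv4 and IPv6 traffic on the ESXi host
# (revert by setting -i 1); takes effect for new connections.
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0
```

This shifts segmentation to software, so expect somewhat higher CPU load while testing.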

 

VMware Support has not been able to resolve the issue for us, saying they have been observing issues with all current X710 drivers and cannot point us in any specific direction, other than asking us to turn to Intel for support.

Honestly, at this point we're at our wits' end and do not know how to proceed, other than switching to a different manufacturer's network hardware altogether.

 

Thank you for any helpful advice.

"IPV6 header not found" in syslog for QinQ ICMP6 packets


Hi all,

 

I am facing what I think is a strange problem with IPv6 and QinQ on a Linux host, and maybe someone has run into something similar (or can provide a hint).

 

I have some VMs running in a host (KVM), and every time any VM sends an ICMP6 Router Advertisement, we get the following log in syslog:

 

Aug 10 11:18:36 Hostname kernel: [1722430.045240] IPv6 header not found

 

For the traffic I use QinQ (802.1Q for both tags); the inner tag is set by OVS on the tap (or bridge, no difference), and the outer tag by a vlan-type veth device, in the following way:

 

 

+-------------------------------------------------------+
|                   +-----------+               HOST    |
|                   |           |          Ubuntu 16.04 |
|                   |    VM-1   |      4.4.0-62-generic |
|                   |           |                       |
|                   +----+------+                       |
|                        |                              |
|                        |TAG=1                         |
|              +-----------------------+                |
|              |       OVS             |                |
|              +---------+-------------+                |
|                        | veth1.203                    |
|                        |                              |
|                        |                              |
|                        +veth0                         |
|              +---------------------+                  |
|              |      Bridge         |                  |
|              +-------+-------------+                  |
|                      |                                |
|                      |                                |
|                   +--+----+                           |
|                   |ens11f1|                           |
+-------------------------------------------------------+
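For reference, the outer-tag leg of a setup like this can be built with a vlan-type sub-interface on a veth pair; the device names below just mirror the diagram (adjust to your environment):

```shell
# veth pair: veth0 (plugged into the Linux bridge) <-> veth1 (toward OVS)
ip link add veth0 type veth peer name veth1
# outer 802.1Q tag 203 is pushed/popped on veth1.203; OVS adds the inner tag
ip link add link veth1 name veth1.203 type vlan id 203
ip link set veth0 up
ip link set veth1 up
ip link set veth1.203 up
```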

 

 

'Regular' (non-ICMP6) traffic seems to work fine; the problem apparently happens only with Router Advertisement or Neighbour Discovery.

 

I checked the code that writes that log, and I think it's in 'kernel/net/ipv6/exthdrs_core.c':

 

 

if (*offset) {
  struct ipv6hdr _ip6, *ip6;

  ip6 = skb_header_pointer(skb, *offset, sizeof(_ip6), &_ip6);
  if (!ip6 || (ip6->version != 6)) {
    printk(KERN_ERR "IPv6 header not found\n");
    return -EBADMSG;
  }
  start = *offset + sizeof(struct ipv6hdr);
  nexthdr = ip6->nexthdr;
}

 

 

but both the protocol and protocol version seem right in tcpdump:

 

 

11:28:38.675686 02:00:40:00:21:31 > 33:33:00:00:00:01, ethertype 802.1Q (0x8100), length 158: vlan 203, p 0, ethertype 802.1Q, vlan 49, p 0, ethertype IPv6, fe80::40ff:fe00:2131 > ff02::1: ICMP6, router advertisement, length 96

`....`:...........@...!1................... @...............@.!1..........@.... ........*.. ..!1................*.. ..!1............. '.

11:28:39.300076 02:00:40:00:23:2a > 33:33:00:00:00:01, ethertype 802.1Q (0x8100), length 158: vlan 204, p 0, ethertype 802.1Q, vlan 193, p 0, ethertype IPv6, fe80::40ff:fe00:232a > ff02::1: ICMP6, router advertisement, length 96

`....`:...........@...#*...................%@...............@.#*..........@.... ........*.. ..#*................*.. ..#*.............
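As a sanity check that both tags really are on the wire, the frame length tcpdump reports adds up exactly (a quick arithmetic sketch):

```shell
# Ethernet header + two 802.1Q tags + IPv6 header + the ICMPv6 payload
# tcpdump reports ("router advertisement, length 96").
eth=14          # dst MAC + src MAC + ethertype
dot1q=4         # one 802.1Q tag (TPID + TCI)
ipv6=40         # fixed IPv6 header
icmp6=96        # ICMPv6 RA length from the capture
total=$((eth + 2 * dot1q + ipv6 + icmp6))
echo "$total"   # matches the "length 158" tcpdump prints
```

So the capture itself looks well-formed; the complaint seems to come from how the kernel parses the doubly-tagged frame, not from the packet contents.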

 

 

I already disabled (just in case) all the offloading features in the ethernet card (Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)).

 

sysadmin@olnmpep02318n002:~/andres$ ethtool -k enp2s0f1

Features for enp2s0f1:

rx-checksumming: off

tx-checksumming: off

  tx-checksum-ipv4: off

  tx-checksum-ip-generic: off [fixed]

  tx-checksum-ipv6: off

  tx-checksum-fcoe-crc: off [fixed]

  tx-checksum-sctp: off

scatter-gather: off

  tx-scatter-gather: off

  tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: off

  tx-tcp-segmentation: off

  tx-tcp-ecn-segmentation: off

  tx-tcp6-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: off [requested on]

generic-receive-offload: on

large-receive-offload: off [fixed]

rx-vlan-offload: off

tx-vlan-offload: off

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: off

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

 

 

Any help/hint is truly appreciated!

 

 

Regards,

Andrés

Unable to Remove Intel NIC Teaming


I currently have two Intel NICs teamed as Team #0.

I'm trying to break the team to use each adapter independently, however, when I try to configure the team via "Intel(R) Advanced Network Services Virtual Adapter" I get a vague error, "GetTeamInfo failed" and the "Settings" tab is empty.

 

 

I've tried running the ProSetCl command-line tool and it tells me there are no teams installed.

 

I've also tried uninstalling the "Intel(R) Advanced Network Services Virtual Adapter" adapter and get a different error message "Teams cannot be removed if a virtual NIC is bound to the team. Remove the virtual NIC before deleting the team." As far as I can tell there is no virtual NIC installed on my machine.

 

How can I force the removal of the Team #0 adapter so that I can use the individual NICs separately? I appreciate any help/guidance you can provide.

 

Thanks

Ethan

Problems with Intel XL710, SR-IOV and Openstack


We are attempting to use XL710 together with Openstack, and acceleration using SR-IOV.

 

With the base drivers provided by Ubuntu 14, VMs can be created, but during boot cloud-init fails to create any interfaces, with the following error message:

 

[    7.562357] cloud-init[395]:     ret = functor(name, args)

[    7.563806] cloud-init[395]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 364, in main_init

[    7.568756] cloud-init[395]:     init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))

[    7.626863] cloud-init[395]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 635, in apply_network_config

[    7.627928] cloud-init[395]:     netcfg, src = self._find_networking_config()

[    7.628708] cloud-init[395]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 622, in _find_networking_config

[    7.632059] cloud-init[395]:     if self.datasource and hasattr(self.datasource, 'network_config'):

[    7.636062] cloud-init[395]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", line 150, in network_config

[    7.644072] cloud-init[395]:     self.network_json, known_macs=self.known_macs)

[    7.705553] cloud-init[395]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 652, in convert_net_json

[    7.706582] cloud-init[395]:     raise ValueError("Unable to find a system nic for %s" % d)

[    7.707395] cloud-init[395]: ValueError: Unable to find a system nic for {'mtu': 1500, 'type': 'physical', 'mac_address': 'fa:16:3e:c8:81:d9', 'subnets': [{'routes': [], 'type': 'static', 'address': '10.0.20.6', 'netmask': '255.255.255.0', 'ipv4': True}]}

 

When installing the latest drivers available from Intel, VMs cannot be created at all. Looking at the OpenStack logs, this error can be found: "internal error: couldn't find IFLA_VF_INFO for VF 7 in netlink response". Searching for this error on Google suggests there can be problems when creating more than 30 VFs with some older versions of libvirt, but we are only testing with 8 VFs right now.

 

We are running Ubuntu 14 and OpenStack Mitaka. We have tried a later Ubuntu/OpenStack combination (Ubuntu 16 and OpenStack Newton), but there were problems there too -- unsure if it was the same issue, since the logs are long gone.

 

We have the latest drivers and firmware updates installed. We do -not- have the i40evf driver installed on the host, since that would conflict with the VM claiming the VFs, according to this document https://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/xl710-sr-iov-config-guide-gbe-linux-brief.pdf .

When configuring SR-IOV we are following the guide on https://docs.openstack.org/mitaka/networking-guide/config-sriov.html
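When chasing the IFLA_VF_INFO error, it may be worth confirming that the kernel actually exposes the VFs over netlink before libvirt queries them; the interface name below is a placeholder for your PF:

```shell
# Create 8 VFs on the PF (interface name is an example), then check that
# the netlink-backed `ip link` output libvirt parses lists every VF.
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs
ip link show ens1f0    # should show "vf 0" through "vf 7" lines
```

If `ip link show` does not list the VFs while /sys does, that would point at the kernel/driver netlink side rather than at libvirt or OpenStack.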

 

We have some other Intel cards (I can't remember the model) for which this works fine.

Intermittency on playing cursor and some audio DPC latency issues


Hello! I get intermittent stutter of the playback cursor in Bitwig Studio 2.1.4 and 2.2, and in Ableton Live 9.7.4 (though much less there). Sometimes there are also small audio latency events and glitches, but only occasionally. Whenever I disable the Intel network adapter, everything is perfectly smooth. What can I do? For now I am forced to disconnect from the internet to work properly. I attach the system info. The system is brand new and all of the drivers are up to date. Thanks in advance!


Audio latency on my DAW "Intel(R) Ethernet Connection (2) I219-V" the culprit???


Hello, I recently purchased a new system (running Windows 10 Pro) for music production, whose specs I have attached below. I get audio glitches and playback-cursor stutter in my digital audio workstation of choice (Bitwig Studio 2.1.4), and I verified the problem is present in Ableton Live 9.7.4 as well. After doing some research on the internet I found that the NIC might be the culprit, and indeed, as soon as I disable the Intel Ethernet Connection (either via Device Manager or via the BIOS) the lag completely disappears, even in complex projects; I noticed the same happens if I just switch the router off. The cursor stutter can last for over 4 seconds, during which nothing on the screen moves, not even the mixer faders. Audio glitches occur more rarely. I have tried updating every possible driver and asked the PC manufacturer for advice. They think it can't be software-related and that I should replace the Intel NIC with another brand, but I am not so sure. Bitwig ran perfectly on my old system under full ethernet load, on Windows 8.1...

Network Card Driver update


My query is: I am running Win10 (upgraded from Win7) and my network card driver is v12.10.29.0.

However, the Intel website says the latest driver is 22.4.0.1.

I run MS Update automatically and it is not flagging this as a problem.

 

System gives me this info

WIRED NETWORKING INFORMATION

Wired Networking Product: Intel(R) 82579V-based Network Controller (OEM)

Driver Version 12.10.29.0
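A side note (my understanding, worth verifying): 22.4.0.1 is the version of Intel's whole download package, while 12.x is the version the NDIS driver itself reports, so the two numbers are not directly comparable. Comparing versions within the same scheme can be done with sort's version sort; the second version here is just an example newer driver number:

```shell
# sort -V orders dotted version strings numerically per component,
# so the first line of the sorted output is the older version.
older=$(printf '12.10.29.0\n12.15.25.6\n' | sort -V | head -n1)
echo "$older"   # prints 12.10.29.0
```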

 

The problem I have is that often (about 25% of the time) when I switch the PC on, it does not make a connection to the router (other WiFi devices can connect through the router, so I don't suspect a router problem). If I do a restart, the connection is made OK.

 

Should I force a driver update? And if yes, then how? Or should I just live with it?

 

This is all a little beyond my technical sphere, so I am hoping for some advice.

Genuine Intel X550 but Yottamark MAC wrong?


Hello,

 

I recently purchased an Intel X550-T2 network adapter from a large online vendor based in City of Industry, CA. The network card was sold by the online vendor itself, not a marketplace seller.

 

I know that counterfeit cards are a real problem and I want to do my utmost to make certain my card is genuine. I was excited to find the YottaMark on the back of the card. I loaded up verify.yottamark.com, put in my code, and it validates. It also points to the correct product type, manufacturing date, product name, and PBA number. Unfortunately, the MAC address on the YottaMark page and the MAC address printed on the box/card label differ.

 

Should I consider the card counterfeit and return it?  I assume the MAC addresses should match between the actual card and YottaMark or is there some known issue with this?

 

Other than the MAC mismatch, the card looks very good.  The PulseJacks look laser etched, etc.

 

What is the best way to validate that I have a genuine card? How big a problem is the mismatched MAC address?
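Before treating it as a true mismatch, it is worth normalizing the two MAC strings first, since the box label, the card label, and the YottaMark page may use different separators and letter case. A small sketch (both addresses here are made-up examples):

```shell
# Strip separators and lowercase so differently-formatted MACs compare equal.
normalize() { echo "$1" | tr -d ':.-' | tr 'A-F' 'a-f'; }
a=$(normalize "A0-36-9F-12-34-56")
b=$(normalize "a0:36:9f:12:34:56")
[ "$a" = "$b" ] && echo "match" || echo "mismatch"   # prints match
```

If the addresses still differ after normalizing, that is a genuine discrepancy worth raising with the vendor or Intel.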

 

Thanks,

 

Barry

Internet keeps dropping, Win10


Hi,

 

My internet keeps dropping every 10 minutes or so, it's a new computer with an ASRock B250M Pro4 motherboard, and a wired ethernet connection. My phone does not have any problems with the same router, neither did my previous computer.

 

I've read a bunch of solutions around the web, the main one is to go to Device Manager -> the network adapter -> Properties -> Power Management, and then turn off the "Allow the computer to save power" option.

 

This option wasn't there, and my settings looked something like this http://i.imgur.com/PBgPCSs.png , which I assume is because the driver is newer than when that solution was proposed. I turned all the options in this dialog off, and it made no difference.

 

I then installed several different driver versions from the Intel site, some with the "save power" option, and none of them stopped the issue from occurring.

 

What are my next steps?

Re: X710/X557-AT Malicious Driver Detection Event 2 on TX q-FreeBsd 11.1


Hi Sharon,

 

I have the same problem.

1. FreeBSD 11.1-RELEASE

2. ixl0: <Intel(R) Ethernet Connection XL710/X722 Driver, Version - 1.7.12-k>

ixl0: fw 5.0.40043 api 1.5 nvm 5.04 etid 80002505 oem 0.0.0

3. This is an iSCSI target server. Under sustained high traffic load, the following errors occur frequently:

ixl0: WARNING: queue 2 appears to be hung!

ixl0: WARNING: Resetting!

ixl0: Malicious Driver Detection event 2 on TX queue 6, pf number 0

ixl0: MDD TX event is for this function!

 

Even driver version 1.7 does not solve it.
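When reporting this upstream it helps to quantify how often the resets fire; a small sketch counting MDD events (sample lines inlined as a stand-in for what lands in /var/log/messages on the affected box):

```shell
# Count MDD events in a log excerpt; on the real server, point grep at
# /var/log/messages instead of the inlined sample.
log='ixl0: Malicious Driver Detection event 2 on TX queue 6, pf number 0
ixl0: WARNING: queue 2 appears to be hung!
ixl0: Malicious Driver Detection event 2 on TX queue 6, pf number 0'
count=$(printf '%s\n' "$log" | grep -c 'Malicious Driver Detection')
echo "$count"   # prints 2 for this excerpt
```

Correlating the event count with iSCSI load periods can also show whether the resets track a specific traffic pattern.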

 

Regards,

Takunii
