dealy663
Dabbler
Joined: Dec 4, 2021
Messages: 32
Hi
I've been running this X520 10GbE NIC for about a year now. It was originally in PCIe slot 1 (the fastest slot, PCIe 4.0 x16). Recently I installed a higher-end GPU and put that into slot 1. While experimenting with PCI passthrough for the GPU I had to move the 10GbE NIC down to PCIe slot 3, which on this motherboard only negotiates a 5.0 GT/s link. That was fine, but now that everything is sorted out I planned to move the NIC to PCIe slot 2, which should give it enough bandwidth for 10 Gb/s again. However, TrueNAS always seems to bring the NIC down when it is in that slot, and I can't figure out what the problem is. It assigns an IP address, and the console says the web UI is at the IP address assigned to the NIC, but the network is unreachable and nothing shows up for it with ip link. Here is the output of dmesg | grep ixgbe when the NIC is working properly in slot 3:
root@TrueNAS[~]# dmesg | grep ixgbe
[ 8.242251] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 8.248427] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 8.254280] ixgbe 0000:04:00.0: enabling device (0000 -> 0002)
[ 8.473352] ixgbe 0000:04:00.0: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16 XDP Queue count = 0
[ 8.486830] ixgbe 0000:04:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0000:03:00.0 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
[ 8.510095] ixgbe 0000:04:00.0: MAC: 2, PHY: 14, SFP+: 3, PBA No: FFFFFF-0FF
[ 8.524346] ixgbe 0000:04:00.0: 80:61:5f:0c:de:59
[ 8.535208] ixgbe 0000:04:00.0: Intel(R) 10 Gigabit Network Connection
[ 8.559777] ixgbe 0000:04:00.0 enp4s0: renamed from eth0
[ 38.822805] ixgbe 0000:04:00.0: registered PHC device on enp4s0
[ 39.007253] ixgbe 0000:04:00.0 enp4s0: detected SFP+: 3
[ 39.147307] ixgbe 0000:04:00.0 enp4s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
And here is the same output when TrueNAS brings the NIC down, with the NIC in slot 2:
[ 7.044422] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 7.050551] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 7.064593] ixgbe 0000:0d:00.0: enabling device (0000 -> 0002)
[ 7.251532] ixgbe 0000:0d:00.0: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16 XDP Queue count = 0
[ 7.251835] ixgbe 0000:0d:00.0: 32.000 Gb/s available PCIe bandwidth (5.0 GT/s PCIe x8 link)
[ 7.251916] ixgbe 0000:0d:00.0: MAC: 2, PHY: 14, SFP+: 3, PBA No: FFFFFF-0FF
[ 7.251917] ixgbe 0000:0d:00.0: 80:61:5f:0c:de:59
[ 7.253003] ixgbe 0000:0d:00.0: Intel(R) 10 Gigabit Network Connection
[ 7.304200] ixgbe 0000:0d:00.0 enp13s0: renamed from eth1
[ 37.789591] ixgbe 0000:0d:00.0: registered PHC device on enp13s0
[ 37.974168] ixgbe 0000:0d:00.0 enp13s0: detected SFP+: 3
[ 38.118202] ixgbe 0000:0d:00.0 enp13s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 65.251730] ixgbe 0000:0d:00.0: removed PHC on enp13s0
[ 65.362349] ixgbe 0000:0d:00.0: complete
In the second log sample you can see the higher throughput of the slot-2 link, and everything looks good until it says "removed PHC on enp13s0". Everything also looks fine in the console network configuration screens.
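In case it helps anyone reproduce the symptom from a saved capture, a quick grep like this flags the teardown messages; the log path is just an example, and the excerpt is taken from the slot-2 log above:

```shell
# Save the slot-2 dmesg excerpt to a file (example path) and scan it for the
# "removed PHC" message that marks the driver tearing the interface down
# after a successful link-up.
cat > /tmp/ixgbe-slot2.log <<'EOF'
[   38.118202] ixgbe 0000:0d:00.0 enp13s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   65.251730] ixgbe 0000:0d:00.0: removed PHC on enp13s0
[   65.362349] ixgbe 0000:0d:00.0: complete
EOF

if grep -q 'removed PHC' /tmp/ixgbe-slot2.log; then
  echo "interface torn down after link-up"
fi
```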
Any ideas or suggestions on how to further troubleshoot this?
Thanks, Derek