Need help - I'm a Linux n00b and two of my network adapters don't work

dcuccia

Cadet
Joined
Feb 12, 2022
Messages
7
Hi! So this week I set up Scale 22.02-RC.2 on my 8th-gen Intel NUC. The internal NIC works fine (Intel I219-V chipset, 1GbE), as does an external UGREEN CM275 USB-C adapter (Realtek RTL8153B chipset, also 1GbE). However, neither of the following upgrades seems to work out of the box:

Sabrent TH-S3EA USB-C (Aquantia AQC-107 chipset, 10GbE, Linux drivers here)
Sabrent NT-S25G USB-C (Realtek RTL8156 chipset, 2.5GbE, Linux drivers here)

Neither is recognized as a network adapter, at least in the UI Network dashboard tab.

According to STH, the NT-S25G should have drivers built into Kernel 5.x and greater, which is confusing.

I tried to follow the manufacturer's instructions for building the drivers (see the attached README.txt). The sudo make install commands complete, but with errors. I don't know much Linux, so I'm not sure if I'm barking up the wrong tree. Any help (e.g. why the adapters aren't recognized automatically, or how to install the drivers) would be greatly appreciated, thanks!
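If it helps, here are a few things I can run and report back (assuming lsusb is available on SCALE). I'm only guessing that the relevant mainline modules would be atlantic for the AQC107 and r8152 for the RTL8156, so please correct me if that's wrong:

# list USB devices the kernel can see, regardless of driver binding
lsusb

# check whether the expected modules even ship in the SCALE kernel
/sbin/modinfo atlantic
/sbin/modinfo r8152

# see what the kernel logs when an adapter is plugged in
dmesg | tail -n 50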
 

Attachments

  • README.txt
    23.7 KB · Views: 602

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
usb is considered to be flaky, at best, by the truenas community and developers, particularly for a server
realtek is known to be flaky, at best, for server usage. hell, it's flaky for workstation usage.
truenas scale, like truenas core, is a storage appliance, and appliances often do not have the full driver complement of a user-oriented installation: drivers for things a server deployer would consider silly, like usb nics, usb drives, realtek anything, audio drivers, gaming drivers, etc.
intel nics are considered some of the most stable, but you'll notice there aren't any usb options with intel controllers... for a reason.
 
elvisimprsntr

Joined
Jun 2, 2019
Messages
591
According to this post, CORE 12.0-U6 supports I219-V


Have you tried the SCALE Nightlies?
 

dcuccia

Cadet
Joined
Feb 12, 2022
Messages
7
@elvisimprsntr thanks - no, but per my post, the included Intel NIC is working fine.

@artlessknave appreciate the feedback, but flakiness isn't my concern at the moment; it's getting a working driver (and presumably, per my links, these devices/chipsets are supported in Linux generally).
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
supported in Linux generally
I think you're assuming Linux is somehow like Windows... it isn't.

The fact that standard compiled distributions (like Debian or Ubuntu) have certain drivers baked into the kernel means nothing about a custom-compiled distribution based on them (where the compiling party decides how to modify it before compiling, excluding some things and maybe including others not present in the standard one).

TrueNAS SCALE is a custom compile, so there are no guarantees of hardware support equal to the standard distribution. Debian in general does have wider support than FreeBSD (and TrueNAS CORE likewise has only a subset of supported drivers compared to the standard distribution of FreeBSD), so SCALE may sometimes mean more hardware will work, but that's not guaranteed.
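If you want to see what the SCALE kernel actually shipped with, you can check the module tree and (if present) the kernel config directly; these are the usual Debian-style paths, so adjust if SCALE keeps them elsewhere:

# modules shipped for USB NICs in the running kernel
ls /lib/modules/$(uname -r)/kernel/drivers/net/usb/

# kernel build config, if the image includes it
grep -iE 'AQTION|RTL8152' /boot/config-$(uname -r)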
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Any help (e.g. why not recognized automatically, or how to install drivers) would be greatly appreciated, thanks!

In general, FreeNAS and TrueNAS should always be run on hardware that has been vetted and recommended by the community. As an appliance, these operating systems do not support the addition of drivers (even if you might be able to mallet them in sometimes). The community maintains extensive resources devoted to identifying good hardware for use, and for networking it is suggested you follow the recommendations listed in the 10 Gig Networking Primer. Networking in particular is a topic where NAS places extreme demands on the network interfaces, and even cards for which drivers exist in the appliance may not work particularly well.

Drivers for new cards often make it into TrueNAS somewhere between three months and a year after appearing in the upstream operating system releases.
 

dcuccia

Cadet
Joined
Feb 12, 2022
Messages
7
I appreciate the feedback @sretalla and @jgreco

Bummer. One of the attractive aspects of Scale to me was the fact that it was based on Debian, and that I might be able to tinker with drivers to get a setup going with hardware I already own (in this case, an Intel NUC-based PC). This is for a "learning-mode" home lab setup, nothing that needs to be production quality.

Reading the community guidance, I don't see any external adapters recommended. One option is to get a Thunderbolt 3 to PCIe external adapter so that I can run a recommended 10G SFP+ NIC. If anyone here has recommendations on those, I'd love to hear them. This would essentially be the same as an eGPU setup - if anyone here has successfully deployed one with Scale (or vanilla Debian), please let me know.

Thanks!
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I would recommend using this as your demo system to decide whether you want to build a recommended purpose-built system or use a different OS.
openmediavault is true debian, and far more likely to have the drivers. i believe you can install any package and software, since it is mainline debian, not applianced.

unRAID might have the drivers as well, though it has a licensing setup that isn't my favorite.
alternatively, a pure linux system can be done, though fancy web UIs are more difficult to come by.
you can also share with proxmox, i believe.
all of these options have ZFS available; it's just that TrueNAS ONLY has ZFS.
 

dcuccia

Cadet
Joined
Feb 12, 2022
Messages
7
I would recommend using this as your demo system to decide whether you want to build a recommended purpose-built system or use a different OS.
Thanks - that's what I'm doing here - experimenting on a home setup in order to learn the ropes and make a potential purpose-built system at my day job. One of the things I really like about Scale is its ability to work with containers and apps. In particular, I would like to host Plex and Home Assistant apps with my home machine. I appreciate that things are "applianced" here, and that options are purposefully limited.

In order to embrace the spirit of using recommended hardware, I purchased an Intel X540 NIC recommended in the 10G primer, and a StarTech external PCIe chassis. When I run lspci, I see the following new items:
02:00.0 PCI bridge: Intel Corporation JHL6340 Thunderbolt 3 Bridge (C step) [Alpine Ridge 2C 2016] (rev 02)
03:00.0 PCI bridge: Intel Corporation JHL6340 Thunderbolt 3 Bridge (C step) [Alpine Ridge 2C 2016] (rev 02)
03:01.0 PCI bridge: Intel Corporation JHL6340 Thunderbolt 3 Bridge (C step) [Alpine Ridge 2C 2016] (rev 02)
03:02.0 PCI bridge: Intel Corporation JHL6340 Thunderbolt 3 Bridge (C step) [Alpine Ridge 2C 2016] (rev 02)
04:00.0 System peripheral: Intel Corporation JHL6340 Thunderbolt 3 NHI (C step) [Alpine Ridge 2C 2016] (rev 02)
3a:00.0 USB controller: Intel Corporation JHL6340 Thunderbolt 3 USB 3.1 Controller (C step) [Alpine Ridge 2C 2016] (rev 02)

So... the enclosure is seen/mounted fine, but the Intel adapter itself is not. I then thought "oh, that's fine, there are drivers here, let me just install those per the manufacturer's instructions." But then I noticed that in the weeks since I got started with Scale, the latest release won't let me install anything anymore (apt has been removed entirely), so I can't follow along here. D'oh!

I'm pretty sure I have a genuine (YottaMark-labeled) X540-T1 NIC, and I've verified the card works in two Windows systems (after driver install...).

Not sure if this is helpful, but "/sbin/modinfo ixgbe" yields:

filename: /lib/modules/5.10.93+truenas/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
license: GPL v2
description: Intel(R) 10 Gigabit PCI Express Network Driver
author: Intel Corporation, <linux.nics@intel.com>
alias: pci:v00008086d000015E5sv*sd*bc*sc*i*
alias: pci:v00008086d000015E4sv*sd*bc*sc*i*
alias: pci:v00008086d000015CEsv*sd*bc*sc*i*
alias: pci:v00008086d000015C8sv*sd*bc*sc*i*
alias: pci:v00008086d000015C7sv*sd*bc*sc*i*
alias: pci:v00008086d000015C6sv*sd*bc*sc*i*
alias: pci:v00008086d000015C4sv*sd*bc*sc*i*
alias: pci:v00008086d000015C3sv*sd*bc*sc*i*
alias: pci:v00008086d000015C2sv*sd*bc*sc*i*
alias: pci:v00008086d000015AEsv*sd*bc*sc*i*
alias: pci:v00008086d000015ACsv*sd*bc*sc*i*
alias: pci:v00008086d000015ADsv*sd*bc*sc*i*
alias: pci:v00008086d000015ABsv*sd*bc*sc*i*
alias: pci:v00008086d000015B0sv*sd*bc*sc*i*
alias: pci:v00008086d000015AAsv*sd*bc*sc*i*
alias: pci:v00008086d000015D1sv*sd*bc*sc*i*
alias: pci:v00008086d00001563sv*sd*bc*sc*i*
alias: pci:v00008086d00001560sv*sd*bc*sc*i*
alias: pci:v00008086d0000154Asv*sd*bc*sc*i*
alias: pci:v00008086d00001557sv*sd*bc*sc*i*
alias: pci:v00008086d00001558sv*sd*bc*sc*i*
alias: pci:v00008086d0000154Fsv*sd*bc*sc*i*
alias: pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias: pci:v00008086d00001528sv*sd*bc*sc*i*
alias: pci:v00008086d000010F8sv*sd*bc*sc*i*
alias: pci:v00008086d0000151Csv*sd*bc*sc*i*
alias: pci:v00008086d00001529sv*sd*bc*sc*i*
alias: pci:v00008086d0000152Asv*sd*bc*sc*i*
alias: pci:v00008086d000010F9sv*sd*bc*sc*i*
alias: pci:v00008086d00001514sv*sd*bc*sc*i*
alias: pci:v00008086d00001507sv*sd*bc*sc*i*
alias: pci:v00008086d000010FBsv*sd*bc*sc*i*
alias: pci:v00008086d00001517sv*sd*bc*sc*i*
alias: pci:v00008086d000010FCsv*sd*bc*sc*i*
alias: pci:v00008086d000010F7sv*sd*bc*sc*i*
alias: pci:v00008086d00001508sv*sd*bc*sc*i*
alias: pci:v00008086d000010DBsv*sd*bc*sc*i*
alias: pci:v00008086d000010F4sv*sd*bc*sc*i*
alias: pci:v00008086d000010E1sv*sd*bc*sc*i*
alias: pci:v00008086d000010F1sv*sd*bc*sc*i*
alias: pci:v00008086d000010ECsv*sd*bc*sc*i*
alias: pci:v00008086d000010DDsv*sd*bc*sc*i*
alias: pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias: pci:v00008086d000010C8sv*sd*bc*sc*i*
alias: pci:v00008086d000010C7sv*sd*bc*sc*i*
alias: pci:v00008086d000010C6sv*sd*bc*sc*i*
alias: pci:v00008086d000010B6sv*sd*bc*sc*i*
depends: mdio,libphy,mdio_devres,ptp,dca,xfrm_algo
retpoline: Y
intree: Y
name: ixgbe
vermagic: 5.10.93+truenas SMP mod_unload modversions
parm: max_vfs:Maximum number of virtual functions to allocate per physical function - default is zero and maximum value is 63. (Deprecated) (uint)
parm: allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599-based adapters (uint)
parm: debug:Debug level (0=none,...,16=all) (int)

Kernel 5.10, which this ixgbe module was built for, was released in Dec. 2020, but the X540-T1 is a 10-year-old card, so I'm not sure how to check compatibility, and/or how to scan for devices that are present but not yet set up properly. I'm again out of my depth - any helpful advice on how to break through this (that doesn't involve giving up/buying a new PC) would be appreciated.
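In case it's the right direction, here's what I was planning to check next. I'm assuming the X540's PCI ID is 8086:1528 (one of the IDs in the alias list above) and that the Thunderbolt authorization knobs live under /sys/bus/thunderbolt - please correct me if either assumption is off:

# does the X540 show up on the PCI bus at all, and which driver (if any) is bound?
lspci -nnk | grep -iA3 ethernet

# is the Thunderbolt device authorized? (0 would mean it's blocked by the security level)
cat /sys/bus/thunderbolt/devices/*/authorized 2>/dev/null

# anything logged when the enclosure is hot-plugged?
dmesg | grep -iE 'thunderbolt|ixgbe'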
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
lspci doesn't care if drivers are loaded or not... you won't magically see devices appear just by loading the drivers for them if they aren't on the PCI bus.

If you've seated the card in the Thunderbolt PCIe adapter, you'll need the driver for that adapter before you'll see the PCIe card behind it.
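A quick way to see whether the thunderbolt driver is loaded and bound to those JHL6340 entries (standard commands, nothing SCALE-specific; the 04:00.0 address is taken from your lspci output above):

# the 'Kernel driver in use:' line shows what is bound to the NHI
lspci -k -s 04:00.0

# is the thunderbolt module loaded at all?
lsmod | grep -i thunderbolt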
 

dcuccia

Cadet
Joined
Feb 12, 2022
Messages
7
lspci doesn't care if drivers are loaded or not... you won't magically see devices appear just by loading the drivers for them if they aren't on the PCI bus.

If you've seated the card in the Thunderbolt PCIe adapter, you'll need the driver for that adapter before you'll see the PCIe card behind it.
I see, thanks. How can I tell if a driver is loaded for the JHL6340 bridge? I'm deducing that perhaps it's not, based on your explanation.
 