First TrueNAS build, trying not to get myself into trouble with my lack of knowledge

ConKbot

Cadet
Joined
Oct 16, 2021
Messages
7
I've been scouring the forums here and elsewhere for a while now while planning hardware for a NAS build. I'm wrapping up the hardware side, which means moving on to the install. Not being experienced with Linux/BSD or the enterprise hardware space, there's a lot to take in, and a lot of the advice spans many hardware generations and may or may not still be valid. So before I start putting data on the server, I'm going to lay out what I 'know' so I can be told why I'm wrong. (That's certainly not a dig at the community here; you all seemed extremely helpful during all my earlier reading. It's just how the internet works: it's easier to tell me what's wrong than to tell me everything I don't know.)

First, hardware:
AsrockRack EPC621D6U-2T16R
Xeon 4210R
4x 32GB ECC Micron MTA36ASF4G72PZ-2G9J3. I couldn't find anything on the QVL besides Newegg third parties (which I generally regard as all scams), Amazon (again... scams), and suspiciously cheap eBay listings for 'new' memory. I wouldn't have had a problem with used memory from some server decommissioning somewhere, but 'new', cheap, and direct-shipped from China screams "I got a label maker and can label random sticks as whatever I want them to be" to me.
SilverStone CS381 case and backplane/bays. Somewhat infamous for drive cooling being a bit lacking, so I designed and made a bracket/duct that mounts between the drive bays and holds 3x high-static-pressure blower fans to keep air moving over the drives.
StarTech 4x M.2 SATA adapter to break out the SATA lanes that don't go to the backplanes.
Micro SATA Cables Slimline SAS 8x to M.2 adapter, breaking out the SFF-8654 connector on the motherboard to 2x M.2 slots for NVMe SSDs (currently unused).
For a UPS, I'm planning on picking something supported by Network UPS Tools, since TrueNAS can be configured to shut down gracefully when a power outage happens.

Drives:
6x Toshiba MG08 14TB SATA 512e drives (MG08ACA14TA) for the storage zpool
3x WD Blue 1TB SATA M.2 SSDs (consumer, non-PLP) for a second zpool
1x WD Blue 250GB SATA M.2 SSD (consumer) for one half of the boot zpool mirror
1x WD Blue 250GB SN570 NVMe M.2 SSD (consumer) for the other half of the boot zpool mirror

Usage:
Home use, single user. SMB shares, torrent/OpenVPN in a jail, a 'personal cloud' for an Android phone (open to suggestions on the current 'best' setups for this), and backing up Windows PCs.
In the future: perhaps more VM-related workload, as I certainly don't intend to use Windows 11 for my main desktop and will be moving to some flavor of Linux, but I may need a VM for certain Windows applications. If I decide to use Plex instead of just browsing a file share like a heathen, I've kept space free to add a GPU for transcoding.

First off, the ashift/sector size, since it sounds like this isn't something I can change after setup. I should be going with 4 KiB/ashift=12 on the hard drives, and either ashift=12 (4 KiB) or ashift=13 (8 KiB) for the SSDs (details are lacking from the manufacturer). Are there any issues or settings to keep in mind for aligning the drives' emulated sectors vs. their physical sectors vs. ZFS blocks, i.e. preventing the write amplification of a 4 KiB ZFS write spanning two physical sectors on the 512e drives? Just use ashift=13 for everything? I don't foresee a lot of sub-8 KiB files being stored.
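For reference, here is one way to sanity-check what the drives report and what ZFS actually ends up using, assuming a TrueNAS CORE (FreeBSD) shell and hypothetical device/pool names (da0, tank):

Code:
# What the drive reports (logical vs. physical sector size)
smartctl -i /dev/da0 | grep -i 'sector size'
diskinfo -v /dev/da0 | grep stripesize

# Minimum ashift ZFS will use for newly created vdevs (12 = 4 KiB)
sysctl vfs.zfs.min_auto_ashift

# After pool creation, confirm what was actually used
# (on TrueNAS the pool cache file may live at /data/zfs/zpool.cache)
zdb -C tank | grep ashift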

Second, compression. It sounds like I should just enable LZ4 and not think too much about it. I have enough CPU and memory to accommodate it, so there's next to no downside. The majority of the files (space-wise) won't be compressible, but it will help with those that are, and save space on files that are smaller than a single sector?
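For anyone searching later, enabling it and checking the payoff is a one-liner each, assuming a hypothetical pool named tank (the TrueNAS GUI exposes the same setting per dataset):

Code:
# Enable LZ4 at the pool root so child datasets inherit it
zfs set compression=lz4 tank

# See whether it is actually paying off
zfs get compression,compressratio tank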

For the actual arrangement of the drives: the boot volume will be the two 250GB drives mirrored. Bigger than it needs to be, but it's not like 120GB drives are available from non-questionable brands. Do I set up the mirror at install time by selecting both drives in the installer?

The 6x HDDs will be in a single RAIDZ2 vdev in their own zpool; not much to think about there.
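TrueNAS builds the pool from the web UI, but for the record, the intended layout is just the command-line equivalent of the following (pool and device names hypothetical):

Code:
# Six-wide RAIDZ2 in its own pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool status tank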

The 3x 1TB SSDs will be in a single RAIDZ1 vdev in their own zpool, used for in-progress torrent files to keep the HDD zpool from getting horrifically fragmented; files get moved to the HDD zpool on completion. It will also host a fast network share. I should put the network share in a separate dataset from the torrents, so it can also be incrementally backed up (rsync? something else?) to the HDD zpool, as sketched below.
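A ZFS-native alternative to rsync would be snapshot replication, roughly like the following, with hypothetical pool/dataset names (ssd, hdd) and dates; TrueNAS can schedule the same thing as a periodic snapshot task plus a replication task:

Code:
# Separate datasets so the share can be snapshotted and replicated on its own
zfs create ssd/torrents
zfs create ssd/share

# Initial full copy of the share to the HDD pool
zfs snapshot ssd/share@2021-11-01
zfs send ssd/share@2021-11-01 | zfs recv hdd/backup-share

# Later, send only what changed since the previous snapshot
zfs snapshot ssd/share@2021-12-01
zfs send -i ssd/share@2021-11-01 ssd/share@2021-12-01 | zfs recv hdd/backup-share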

No L2ARC: I'm not repeatedly accessing a large set of small/medium files, so it wouldn't benefit me. No SLOG, as I don't have anything that is going to be doing synchronous writes. If I end up running a VM in the future (either on the server, or on a remote PC with the VM stored on the server), that may bring more sync writes into the picture and could benefit from a SLOG; if I do add one, it will be on mirrored NVMe SSDs. Power loss protection wouldn't be strictly mandatory on either, since losing the L2ARC or SLOG isn't a lethal threat to the zpool. Only data in flight in the SLOG is at risk, and only if the SSD lies about sync writes being complete while they are actually still sitting in its cache. Unfortunately some do, so a PLP drive would be highly preferred in this application.
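For later reference, a SLOG can be attached to and removed from an existing pool without rebuilding it, something like this (hypothetical pool and device names; the mirror's vdev name comes from zpool status):

Code:
# Attach a mirrored SLOG to an existing pool
zpool add tank log mirror nvd0 nvd1

# Remove it again if it turns out not to help (use the name shown in zpool status)
zpool remove tank mirror-1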

For fan control, it sounds like AsrockRack isn't the best supported, but this thread has some tools for AsrockRack boards. I'll have to sort through that further to make sense of it and see what is supported by this BMC/IPMI.

That's the setup/configuration stuff. For operation/maintenance: scrubs once or twice a month, SMART long tests once or twice a month, and weekly SMART short tests.
Fragmentation can't really be fixed in place at the moment; if it comes down to it, moving all the data off, nuking the snapshots, and moving it all back will fix fragmentation, but that's obviously rather invasive.
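These are all scheduled from the TrueNAS web UI, but for anyone searching later, the underlying commands amount to roughly the following (hypothetical pool and device names):

Code:
zpool scrub tank              # start a scrub
zpool status tank             # watch scrub progress and errors
smartctl -t short /dev/da0    # SMART short self-test
smartctl -t long /dev/da0     # SMART long self-test
smartctl -a /dev/da0          # review test results and SMART attributes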

With the unused SAS channels I have, when the time comes to upgrade, would it be possible or advisable to do the following?
1) Add an external SAS chassis with my new drives in it.
2) Make a new vdev/zpool.
3) Point all the automatic tasks at the new zpool and move the data to the new zpool.
4) Decommission the old zpool and remove the drives.
5) Move the new drives into the server.
6) Put the SAS chassis back away in the closet.
That would simultaneously upgrade my storage without having to resilver six times, while also removing the fragmentation that would remain if I upgraded the drives one at a time, at the expense of losing all snapshots, so I had better be sure. (A rough sketch of the copy step is below.)
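One possible shape for the copy in step 3, assuming hypothetical pool names old and new: a recursive send/receive rewrites every block on the destination, so fragmentation doesn't carry over, and whether the snapshot history comes along just depends on what gets replicated.

Code:
# Replicate everything, snapshots included, to the new pool
zfs snapshot -r old@migrate
zfs send -R old@migrate | zfs recv -F new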

Thanks for even reading my wall of text here.
 

ConKbot

Cadet
Joined
Oct 16, 2021
Messages
7
I've been slowly getting things set up in my free time, and everything has been progressing smoothly.

I can confirm a few things that I didn't have answers to before I started this build, in case anyone ends up searching for it in the future.

1) The EPC621D6U-2T16R ships with the SAS controller in IT mode (see attached RAID Boot.jpeg), and SMART tests work just fine with SATA drives. The sas3flash and sas3ircu commands do not identify the Broadcom SAS 3616; storcli does. This thread on STH mentions it in the 3rd-gen controller category, but it behaves like the 3.5- or 4th-gen controllers, where storcli is how you'd interface with it. It doesn't matter (at the moment at least), as it has been plug-and-play just fine.
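In case anyone wants to poke at theirs, the basic storcli queries look something like this (the binary may be installed as storcli or storcli64 depending on the package):

Code:
storcli show            # list the controllers storcli can see
storcli /c0 show all    # full details for controller 0 (the SAS 3616 here)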

2) The SFF-8654/Slimline connector on the motherboard appears to work just fine with the adapter board linked above (clock jumper in the "SFF-9402" position, not the "Intel" position). The testing was not extensive, just making sure the NVMe drive was identified in the BIOS; no benchmarks, etc., as the NVMe drive was destined to go into the motherboard as a boot device, so that was just playing around before the OS was installed. The BIOS splits the 8x Slimline connector into A and B halves, and the NVMe drive was identified on each half when it was present.

3) The drives have been staying ~5-8°C above ambient with the fan duct and blower fans installed, both during light usage (loading files onto the NAS) and during a SMART long test. Once I finish setting up and getting files onto it, I'll check the temps during a scrub. If anyone is interested, I can clean up the 3D models and share them. Much better than what I've seen on Reddit, with people complaining of 40-50°C at idle for WD Reds.


Finally, SMART tests on NVMe drives: other threads describe them being problematic in one form or another, but this error doesn't look like the known issue where the test command gets issued and the test runs but the results are hard to get back from the drive; here the test never gets issued at all.
(Using TrueNAS 12.0-U7 with the WDS250G3B0C (linked in the first post) in the M.2 slot on the EPC621D6U-2T16R.)

Code:
ValidationErrors
[EINVAL] disks.0.identifier: {serial_lunid}21430W802959_e8238fa6bf530001001b448b4136bc2e is not valid. Please provide a valid disk identifier.

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 138, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/smart.py", line 403, in manual_test
    verrors.check()
  File "/usr/local/lib/python3.9/site-packages/middlewared/service_exception.py", line 62, in check
    raise self
middlewared.service_exception.ValidationErrors: [EINVAL] disks.0.identifier: {serial_lunid}21430W802959_e8238fa6bf530001001b448b4136bc2e is not valid. Please provide a valid disk identifier.



Should this go in a bug report, or am I missing something?
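In the meantime, a possible cross-check from the shell, assuming smartmontools and a hypothetical device name (/dev/nvme0); the self-test part only works if the drive implements the optional NVMe device self-test feature:

Code:
smartctl -a /dev/nvme0         # health and error log straight from the drive
smartctl -t long /dev/nvme0    # attempt a self-test directly, bypassing the middleware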
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'd submit a bug report. Middleware shouldn't crap out like that.
 