TrueNAS Scale home NAS/server for above-average use: would this hardware & ZFS setup work?

JossBrown

Cadet
Joined
Jun 30, 2023
Messages
6
I'm planning on getting a NAS/server, and one of the options is to build it myself. For the DIY solution, I would probably choose TrueNAS Scale. (I'm not sold on unRAID—a few compromises I might not be able to live with; one other faint option is plain Ubuntu Server using mergerfs and SnapRAID.) The server, whether a DIY or a turnkey device, is meant to be used for a rather wide range of services, including audio production, possibly with iSCSI (see below), i.e. it should be an all-SSD setup, I think.

I welcome any input or suggestions/improvements. Until now I've never been a real hardware guy, so I don't know if this would all work together, if there are any bottlenecks etc.

Money is a slight issue as far as storage is concerned, this being planned as an all-SSD NAS, so this setup would have to use consumer-grade SATA SSDs for main storage, i.e. something like the Samsung 870 EVO. (I've read a couple of not-so-nice things about the Crucial MX500 lately, though I'm using some of them myself.) There would definitely be more reads than writes (it's for the most part a personal server system, after all), so I think consumer-grade SSDs shouldn't bother me. (This might be different for iSCSI and production use, but a separate M.2 NVMe pool might be better for that. Might be an option down the road.)

(I) ZFS setup in TrueNAS Scale

(1) OS: 2 x 512 TB SATA-SSDs (mirrored) – or smaller size, of course

(2) macOS Time Machine backup: 1 x 4 TB SATA-SSD (standalone, not part of main pool)

(3) main storage: 8 x 4 TB SATA-SSDs (RAID-Z2)

(4) internal server backup: 1 x 20 TB HDD (standalone, not part of main pool)

(5) L2ARC (metadata only): 1 x 512 TB M.2 NVMe (via PCIe card; see below), e.g. Sabrent SB-ROCKET-NVMe4-500

I assume there should be an additional spare HDD for regular backup hotswapping. (Maybe swap once every 3 months or once a month?)

I have not researched the metadata-only L2ARC deeply yet, so I don't know if I really need it.
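
From the little I've read so far, configuring it later would be a two-step affair, roughly like this (pool name "tank" and the device path are placeholders on my part, and this is untested):

    # limit the pool's L2ARC to metadata only
    zfs set secondarycache=metadata tank
    # attach the NVMe as a cache device (a cache vdev can be removed again later)
    zpool add tank cache /dev/nvme0n1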

(II) Hardware setup (sans storage)

(1) rackmount storage chassis:
Silverstone RM22-312

(2) mainboard:
Kontron K3851-R ATX

(3) CPU:
Intel Core i9-13900T (iGPU, DDR5, ECC, PCIe 5.0, low TDP)

(4) 4 x 32 GB DDR5 ECC RAM:
Micron 32GB DDR5-4800 ECC UDIMM 2Rx8 CL40 (MTC20C2085S1EC48BA1R)
(maybe start with 2 x 32 GB first)

(5) dual SFP28 25GbE PCIe NIC:
10Gtek 710-25G-2S-X8
PCIe 4.0 x8 card in slot #1 (PEG PCIe 5.0 x16)

(6) 8-port SATA III 6Gb/s PCIe card:
n/a, maybe Beyimei ASM1064+JMB575
PCIe 3.0 x4 card in slot #3 (PCIe 4.0 x1/x4)

(7) PCIe to single M.2 NVMe adapter card for the potential metadata-only L2ARC:
n/a, maybe GrauGear G-M2PCI01
PCIe 4.0 x4 card in slot #5 (PCIe 4.0 x4 open slot)

Remaining PCI(e) slots on the motherboard:
(1) slot #2: PCIe 3.0 x1 (open slot)
(2) slot #4: SCSI (not sure what that is – closed slot with 4 lanes of PCIe 3.0?)
(3) slot #6: PCIe 3.0 x1 (open slot)
(4) slot #7: 32-bit PCI x1

Additional hardware:

For a SLOG I would use the motherboard's two internal M.2 NVMe slots (which are gen5, I assume), e.g. with two mirrored Sabrent SB-ROCKET-1TB (gen4). But with the above setup, especially with 128 GB of RAM and SATA-SSDs, I don't think I'll need a SLOG. So the two internal M.2 NVMe slots could be used down the road for fast mirrored iSCSI storage etc., if I ever need it. But at first, I probably wouldn't use them.
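
(If I ever did add a SLOG, my understanding is that it would go in as a mirrored log vdev, roughly like this, with pool and device names as placeholders:

    # add the two internal NVMe drives as a mirrored SLOG (separate intent log)
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

From what I've read, only synchronous writes, e.g. from iSCSI or NFS, would actually benefit from it.)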

(III) Services on the NAS/server (incomplete list)

(Some of the services would obviously need to run in Docker containers.)

* general data storage & project backups (SMB offloading to free space on Macs)
* Time Machine backups (to a standalone SATA-SSD)
* database & phpMyAdmin
* server for music/audio production (possibly iSCSI)
* remote production audio file sharing
* encrypted backup of sensitive documents to remote cloud
* home media server (video & music) with hardware transcoding (Emby or Plex)
* theoretically up to 7 remote users accessing media server
* git server (probably Gitea)
* Vaultwarden
* NextCloud
* VMs (only launched intermittently): Windows XP, maybe Windows 10 or 11
* notes server (probably Joplin)
* CalDAV
* CardDAV
* mail server (IMAP backup server only)
* ad-hoc web hosting/testing, incl. WordPress
* DNS (Unbound)
* Hotline-style file sharing server (wired Docker)
* Matrix server (probably Beeper)
* node for the nostr protocol
* documents server for ebooks
* ClamAV
* regular ZFS snapshots
* et cetera

(IV) Additional notes

* with OpenZFS for macOS, I would be able to mount & access the standalone Time Machine backup SSD on a Mac
* an ext4 volume for the Time Machine backup would be preferable, but afaict TrueNAS doesn't support that
* still researching if there is a way to run macOS content caching on a Linux-based system (doesn't look good for now)
* would love to dive into NFS, but I'm on Mac clients, and NFS is apparently very buggy on macOS; see e.g.: https://www.bresink.com/osx/143439/issues.html
 

JossBrown

Cadet
Joined
Jun 30, 2023
Messages
6
For saving energy, I would probably apply most of the settings mentioned in this article: https://itnext.io/truenas-scale-low-power-setup-db62acbeee69 … Kontron's BIOS (modified Aptio V UEFI BIOS) seems to support most of these, if not all. (Disabling virtualization would however mean no VMs in TrueNAS.)

Since there's only one spinning disk in my setup, which is meant for server backups only, I would manage this in the backup script, which (post-backup) would tell the HDD to spin down until the next backup run ca. 24 hours later.
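
The post-backup step itself should be a one-liner along these lines (the device path is a placeholder, and I haven't tested this yet):

    # send the backup HDD into standby right after the backup finishes
    hdparm -y /dev/sdX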
 

JossBrown

Cadet
Joined
Jun 30, 2023
Messages
6
A postscriptum on the SATA ports, which I failed to mention: the Kontron motherboard has 4 internal SATA ports, so the TrueNAS Scale operating system (2-SSD mirror, hotswaps 1+2) and the two standalone drives (1 SSD for Time Machine in hotswap 3, 1 HDD for server backup in hotswap 12) would use the internal SATA ports, which means that the RAID-Z2 storage pool would run completely over the 8-port PCIe SATA card (hotswaps 4–11).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You have a few too many questions for someone like me who is half asleep. But here are some answers.

(1) OS: 2 x 512 TB SATA-SSDs (mirrored) – or smaller size, of course
I am guessing you meant 512GB, not terabytes... Size-wise, 32GB to 64GB is perfectly fine. That allows some space to be used on the boot pool (aka OS storage) for both the system dataset and swap space.


(5) L2ARC (metadata only): 1 x 512 TB M.2 NVMe (via PCIe card; see below), e.g. Sabrent SB-ROCKET-NVMe4-500
Again, I am guessing you meant GigaBytes...
If you truly have 4 x 32GB of memory, on SCALE that would only be up to 64GB of ZFS ARC. And if you start with 2 x 32GB, that's an even smaller maximum ZFS ARC at 32GB. Using 512GB for L2ARC, even just for metadata, is a bit too much. The rule of thumb is 5 times ARC size, thus 320GB, but you can go as high as 10 times ARC size... Just note that the index for the L2ARC takes up space in the ARC, reducing the ARC's overall size.


(2) macOS Time Machine backup: 1 x 4 TB SATA-SSD (standalone, not part of main pool)
(4) internal server backup: 1 x 20 TB HDD (standalone, not part of main pool)
You have a couple of single disks, one listed as backup. That's fine. Just understand that the macOS Time Machine backup has no redundancy... loss of a block is likely loss of a bit of data, and loss of the device is loss of all macOS Time Machine backups (unless they're backed up elsewhere...).


Your network interfaces look good, Intel type. Not sure if the XXV710 will be well supported... at least today. But it almost certainly will be in the future.


This SATA expansion card is less suitable due to its SATA port multiplier:
BEYIMEI PCIe SATA Card 8 Ports, with 8 SATA Cables, Power Splitter Cable and Low Profile Bracket, SATA 3.0 Controller Expansion Card, PCI-E X1 3.0 Gen3 (6Gbps) Controller Card (ASM1064+JMB575)


The list of uses is extremely extensive. Not sure if they will all be supported on TrueNAS SCALE, or even work well together.


You have to be careful about scripted actions, like spinning down a hard disk drive. If the pool is still imported, it is probable that the OS will either cause it to spin back up within minutes, or cause problems, showing pool errors. Ideally, you would export the pool, then spin it down. When it is time for a backup, spin it up, wait 20 or so seconds, and then import it. You should be able to use the CLI / API interface to export and import so the GUI knows about those actions too. (The TrueNAS CLI is NOT a Unix shell... the CLI is a command line interface to the NAS software.)
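
Roughly, the sequence would look like this (raw ZFS commands for illustration only, pool name and device path are examples... on TrueNAS the CLI / API equivalents would be the cleaner way, so the GUI stays in sync):

    # after the backup: export the pool, then spin the disk down
    zpool export backuppool
    hdparm -y /dev/sdX

    # before the next backup: wake the disk, give it time, then re-import
    dd if=/dev/sdX of=/dev/null bs=512 count=1   # any read spins the drive back up
    sleep 20
    zpool import backuppool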


That is all I have.
 

JossBrown

Cadet
Joined
Jun 30, 2023
Messages
6
Thank you for your reply. Some quick answers.

Yes, I meant GB. (Wasn't able to edit my post.)

I'm not even sure if I need L2ARC, even for metadata. But I would not install an L2ARC SSD at first. I'd first want to test how the system is doing without it. If it's not great, I'd increase RAM from 64 to 128 GB. If still not great, only then would I go for L2ARC. (But in the end, I still have to research more about the metadata-only option.)

I knew about the 5x rule of thumb, but what's new to me is that the size is based on ARC size. (I always thought it was 5x RAM size.) Thank you for that info. But how do we know the size of the ARC? Afaik it's variable and can use up to 80% of RAM. At any rate, with the above approach (first RAM expansion, then possibly L2ARC), I guess I'd be fine with 512 GB for an M.2 NVMe L2ARC, because it would be based on 128 GB RAM then.
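
For reference, the current ARC size can apparently be read off a running system: arc_summary ships with OpenZFS on Linux, and the raw kstats live in procfs (untested on my part):

    # human-readable ARC report
    arc_summary | head -n 30
    # or the raw counters, in bytes: current target size (c) and ceiling (c_max)
    awk '/^c / || /^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats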

Some tangential thoughts: if at some point I do go down the iSCSI rabbit hole (two internal NVMe slots, mirrored), I might need a SLOG for that pool. (At least that's what I've read.) Then the gen4 x4 PCIe slot would be blocked by the one NVMe drive that I've currently assigned to a potential L2ARC. So I'm now thinking that if I ever go L2ARC, it would probably be better to use one of the board's two gen3 x1 PCIe slots for an NVMe adapter, which would give me about 800 MB/s to 1 GB/s of speed for the L2ARC. Example: the GloTrends PA09-X1.
The gen4 x4 PCIe slot would then be free to house a dual NVMe adapter for two mirrored NVMe drives as the SLOG for the iSCSI pool (see above, internal NVMe slots).

Losing backups on a single disk wouldn't be a big deal: (1) for the Time Machine SATA SSD, I'd still have the current version of my files on my Macs, plus regular cloned backups to a DAS (using Carbon Copy Cloner); I'd just lose older versions of a few files, maybe; (2) for the actual server backup (onto the single HDD) I still have the current server contents, but I should also have a second HDD, and both HDDs should be hotswapped regularly, so if one backup drive fails, I'd still have the second one (with slightly older data) in a safe place.

Thank you for the XXV710 info: I have to monitor development regarding support. My original idea was to have a dual SFP+ 10GbE card, but since the router (MikroTik) will have dual SFP28, I chose the faster NIC.

Regarding the SATA card, yes, that's the thing I feared would be the biggest problem. I'd have to look for a better 8-port PCIe 4.0 x4 card (I hope those even exist).

The list of use cases is extensive, I know. I'd have to start with the important ones, then inch my way forward, and see how it works out.

Scripted action: there is one spin-down script for TrueNAS that I would implement, provided it works; see here: https://github.com/ngandrass/truenas-spindown-timer
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
(2) macOS Time Machine backup: 1 x 4 TB SATA-SSD (standalone, not part of main pool)

(3) main storage: 8 x 4 TB SATA-SSDs (RAID-Z2)
Why would you want these to be separate? Put all nine SSDs in the same pool, and create a dataset on that pool for time machine. Set a quota on it if you like.
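
Something like this (pool and dataset names are examples; the GUI will do the same when you create the dataset and set the quota there):

    # one dataset on the main pool for Time Machine, capped at 4 TB
    zfs create -o quota=4T tank/timemachine
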
8-port SATA III 6Gb/s PCIe card:
Use a SAS HBA for this.
For a SLOG
I don't see anything in your use case that sounds like it would benefit from SLOG. If you do use it, it's pretty particular in what it requires of the devices, with power-loss protection being the proverbial long pole in the tent.
dual SFP28 25GbE PCIe NIC:
I suspect this is pretty optimistic; I doubt your system would routinely be able to saturate 10 GbE, much less 25 GbE. But the card should be well-supported.

Your motherboard choice doesn't look particularly good, though it's a brand I don't think I've seen before, and I'm not sure the 12th/13th gen Intel CPUs are a particularly good choice at this time either. I think you'd be better off, and likely save some money, going a generation older on the motherboard/CPU and at the same time stepping up to a server board.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
My 2p's worth:
  1. CPU / Motherboard is too new. You are paying for the newness and you don't need to
  2. Your SATA expansion board is shit. Use a proper HBA as others have said
  3. Run without L2ARC at first; 128GB of ECC is a good start
  4. SLOG - nothing I can see will use one.
  5. NIC - nice expensive card. Should be well supported. May, as others have said, be a bit optimistic - but no harm in that
  6. As @danb35 says - put all the disks into a single pool and apply a quota if you feel like it
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
* an ext4 volume for the Time Machine backup would be preferable, but afaict TrueNAS doesn't support that
That's correct.
* still researching if there is a way to run macOS content caching on a Linux-based system (doesn't look good for now)
I'm not sure what you mean here, though it's possible to run macOS in a virtual machine under KVM (which is the virtualization system TrueNAS SCALE uses), and I think there are some folks here who have gotten it working that way. So if you can't do whatever it is directly under Linux, you could spin up a macOS virtual machine and do it there, optionally storing the data itself on your NAS rather than inside the VM.
would love to dive into NFS
Is there a particular reason you'd prefer that to SMB? I'm not saying you shouldn't, as such, but it seems a little odd to me that you'd "love to dive into" any particular file-sharing protocol. Use what works; if NFS is buggy under macOS, then that isn't "what works."
* server for music/audio production (possibly iSCSI)
I assume your goal here is that the storage would be on the NAS, not that the music/audio production would be on the NAS. Whether iSCSI or some other protocol would be best is going to depend on your particular needs, but one thing to be aware of is that iSCSI doesn't generally play well with multiple clients (although this is filesystem-dependent; if you use a cluster-aware filesystem like Ceph or Gluster with its associated software stack, you'd be fine). So if there's one client machine on which you're doing production, and that's the only one that needs access to that data, iSCSI may make sense. But if there are multiple machines that may need access to that data, particularly if that need is simultaneous, iSCSI is probably not the protocol for you.
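
For what it's worth, the ZFS side of an iSCSI share is just a zvol, i.e. a block device carved out of the pool. A sketch, with name and size made up (TrueNAS creates this for you when you configure an iSCSI extent in the GUI):

    # sparse 1 TB block device to export as an iSCSI extent
    zfs create -s -V 1T tank/audio-iscsi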

Most of the uses you list are going to involve apps, and most of those will probably be from TrueCharts. While they have a large and featureful app catalog, you should be aware that they don't seem particularly hesitant to implement breaking changes that will require you to reinstall your apps. I presume this will be less frequent as their catalog matures, but you should still be prepared.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I knew about the 5x rule of thumb, but what's new to me is that the size is based on ARC size. (I always thought it was 5x RAM size.) Thank you for that info. But how do we know the size of the ARC? Afaik it's variable and can use up to 80% of RAM. At any rate, with the above approach (first RAM expansion, then possibly L2ARC), I guess I'd be fine with 512 GB for an M.2 NVMe L2ARC, because it would be based on 128 GB RAM then.
...
There are differences between OpenZFS on FreeBSD (used in TrueNAS Core) and OpenZFS on Linux (used in TrueNAS SCALE). The one difference I am making reference to is the ARC size. By default, OpenZFS can use most of the memory in Core / FreeBSD. But in SCALE / Linux, it is limited to 50% of memory size. Thus, I adjusted the 5x / 10x rule of thumb to be based on ARC size versus L2ARC size. Perhaps it does not matter, but I thought it might.

iXsystems is looking at the reasons why OpenZFS on Linux has limited its ARC size to 50%. Only time will tell if there is a solution to that problem.
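
As an aside, on Linux that limit is just a module parameter, so it can be raised by hand (value in bytes; this example is 96 GiB, and it does not survive a reboot unless made persistent):

    # inspect the current ARC ceiling (0 means the built-in default)
    cat /sys/module/zfs/parameters/zfs_arc_max
    # raise it to 96 GiB on the running system
    echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max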
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
(5) L2ARC (metadata only): 1 x 512 TB M.2 NVMe (via PCIe card; see below), e.g. Sabrent SB-ROCKET-NVMe4-500

(7) PCIe to single M.2 NVMe adapter card for the potential metadata-only L2ARC:
You don't really need metadata L2ARC given that you will be using SSDs; you can save money there (and add it later if you find yourself in need).

If you haven't, please read the following resources regarding high-speed networking: even if they're not of direct interest to you, they still contain a few good pieces of info that might help you.
 