Mini X+ Build - Question on planned layout with added hardware

kaji22

Cadet
Joined
Feb 26, 2021
Messages
4
Greetings,

I am planning to build a new NAS and I have my eye on the TrueNAS Mini X+ with the hardware additions and planned configuration listed below. The idea is that this ends up being a TrueNAS Scale machine with GA hopefully arriving later in the year, but I see no reason why it would not work on Core also. I have read a number of posts on these forums and elsewhere so I think I am making decent decisions regarding the hardware and layout, but I am hoping someone with some more experience might notice if I am making a potential mistake.

I have 2 main questions...

1) Are the hardware choices below sensible for OpenZFS with the proposed capacities, l2arc, and slog?
2) Might I run into any gotchas regarding the partition layout, particularly with the multipurpose usage of the 2 SSDs?

Thanks for reading!

Hardware components
  • TrueNAS Mini X+ w/ 8 cores, 64 GB RAM, dual 10GbE NICs
  • 5x: Western Digital Red Plus 10TB 3.5” HDDs
  • 1x: Intel Optane 800P 118GB M.2 PCIe 3.0 NVMe
  • 2x: Samsung 870 EVO 1TB 2.5” SSDs
Pools Layout
  • zroot pool
    • mirror on Samsung SSDs (is 200GB plenty?)
  • tank pool (media, backups, other file storage/shares, etc.)
    • raidz2 pool on WD Red HDDs (30 TB usable)
    • l2arc on Intel Optane NVMe (118 GB)
    • slog as mirror on Samsung SSDs (small, maybe 10GB?)
  • compute pool (for VMs, containers, etc.)
    • mirror on Samsung SSDs (remainder after zroot/slog, ~790 GB)
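In zpool command terms (leaving the zroot/boot side to the installer), this is roughly what I am picturing - device and partition names below are just placeholders, and I realize the GUI would normally do this for me:

Code:
  # tank: RAIDZ2 data vdev, Optane as L2ARC, small mirrored SLOG slice on the EVOs
  zpool create tank raidz2 da0 da1 da2 da3 da4 \
      cache nvd0 \
      log mirror ada0p2 ada1p2
  # compute: mirror on the remaining EVO capacity
  zpool create compute mirror ada0p3 ada1p3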
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hello,

The "appliance" nature of TrueNAS means that the boot device isn't capable of being used for other purposes (at least not officially - it can be rigged to do it if desired, but you have an alternative here) so the idea of combining your boot pool and other workloads is out the window. Fortunately, if you buy a TrueNAS Mini from the iXsystems store, it should come with a small DOM (Disk On Module) SSD that is used as the boot device.

The devices you've chosen for L2ARC and SLOG should also be switched; the EVOs aren't viable SLOG devices, and the Optane stick will significantly outperform them in both write latency and endurance.

I would set it up as below:

Pools Layout
  • truenas-boot pool
    • The included SATA DOM (16GB/32GB)
  • file-storage pool (media, backups, unstructured files)
    • RAIDZ2 on 5x disks (30TB usable)
    • no SLOG or L2ARC (as these workloads likely won't benefit from them)
  • compute-pool
    • mirror on full capacity of Samsung SSDs (1TB)
    • SLOG on Optane NVMe card
Side note - Using "tank" for your pool name might lead to confusion down the road with example commands. Give your pool a descriptive name. :)
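For reference, the rough CLI equivalent of the above (device names are placeholders - on a Mini you would build this through the web UI rather than the shell):

Code:
  # file-storage: plain RAIDZ2 across the five WD Reds, no support vdevs
  zpool create file-storage raidz2 da0 da1 da2 da3 da4
  # compute: mirrored EVOs at full capacity, Optane as the SLOG
  zpool create compute mirror ada0 ada1 log nvd0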
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Conventional wisdom has it that L2ARC should only be used under specific circumstances. I have no first-hand experience, but many people here whose judgement I trust say that 64 GB of RAM is the absolute minimum before L2ARC becomes worthwhile. While that is what your setup has, I would assume that expanding RAM is typically the better option to pursue first.
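If you want to judge it empirically later, the ARC hit rate is a reasonable indicator. Assuming the standard OpenZFS tools are present on whatever build you end up running, something like this after the box has seen a realistic workload for a while:

Code:
  # Summary of ARC size and hit/miss ratios; a consistently high hit ratio means L2ARC would add little
  arc_summary
  # Live ARC hit/miss rates, sampled every 5 seconds
  arcstat 5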
 

kaji22

Cadet
Joined
Feb 26, 2021
Messages
4
Hi HoneyBadger, thanks a bunch for your reply!

The "appliance" nature of TrueNAS means that the boot device isn't capable of being used for other purposes (at least not officially - it can be rigged to do it if desired, but you have an alternative here) so the idea of combining your boot pool and other workloads is out the window. Fortunately, if you buy a TrueNAS Mini from the iXsystems store, it should come with a small DOM (Disk On Module) SSD that is used as the boot device.

Ahh yes, that is a very good point. And it is super convenient that the DOM is already there to use for the zroot so I can just skip relocating that altogether.

The devices you've chosen for L2ARC and SLOG should also be switched; the EVOs aren't viable SLOG devices, and the Optane stick will significantly outperform them in both write latency and endurance.

I see. The performance aspect does make sense. But would it not be a concern that there is only one NVMe, so the SLOG has no redundancy? From a few things I have read, L2ARC doesn't need fault tolerance but SLOG does.

I would set it up as below:
  • compute-pool
    • mirror on full capacity of Samsung SSDs (1TB)
    • SLOG on Optane NVMe card

Originally, I had the SLOG included for the larger spinning-disk array, since I expect large sequential writes to the slower disks with double parity to be especially slow compared to the Optane SLOG. But your layout suggests it would benefit the compute pool more, even though that pool is already on SSDs. I'm a bit new to these concepts so I believe you, but I admit I don't understand why that is.

The plan for the files that end up on the compute-pool would end up being a mix of docker images/volumes and/or kubernetes pods and volume claims. Also potentially a KVM disk image or two but only for things that I am unable to deploy via docker/kubernetes.

I assume the SLOG's role here is just to prevent data loss on power failure rather than to provide a performance increase? I don't know what the performance benefit would be, if any, since the pool is already on SSDs.

Also, the Optane's capacity is 118 GB. Would there be any benefit to splitting that in half and letting the second slice be a SLOG for the HDD file-storage pool as well? From my reading, it seems that very little of a SLOG device's capacity is actually used at any given time. I guess the downside would be that the bandwidth to/from the NVMe would then be shared between two pools, but writes to the file-storage pool would be quite infrequent compared to the compute pool, so maybe it would not matter.
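(To be concrete about what I mean by "splitting", I am imagining something like the following - device names, labels, and sizes are placeholders, and on SCALE the partitioning tool would be different:)

Code:
  # Split the Optane into two ~59 GB partitions (FreeBSD/CORE style)
  gpart create -s gpt nvd0
  gpart add -t freebsd-zfs -s 59G -l slog-files nvd0
  gpart add -t freebsd-zfs -l slog-compute nvd0
  # Attach one slice to each pool as a log vdev
  zpool add file-storage log gpt/slog-files
  zpool add compute log gpt/slog-compute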

Thanks again!
 

kaji22

Cadet
Joined
Feb 26, 2021
Messages
4
Conventional wisdom has it that L2ARC should only be used under specific circumstances. I have no first-hand experience, but many people here whose judgement I trust say that 64 GB of RAM is the absolute minimum before L2ARC becomes worthwhile. While that is what your setup has, I would assume that expanding RAM is typically the better option to pursue first.

Hi Chris, thanks for your reply! Yes, I am not certain how much, or whether, my use case would benefit from L2ARC. The layout in my original post has it, but I admit it was mostly included because there happened to be an M.2 slot available and it seemed like a potential way to increase performance for low effort/cost. 64 GB RAM is the largest "configuration" for the Mini X+ at purchase, but I don't know what the maximum supported by its motherboard would be. Also, given my pool sizes of 50TB/30TB raw/usable HDD and 2TB/1TB raw/usable SSD, it's unclear to me whether adding more RAM would have a significant impact.
 

kaji22

Cadet
Joined
Feb 26, 2021
Messages
4
After giving the above suggestions much more consideration and doing a good bit of further reading, I've come up with two alternate scenarios that address the shortcomings of my original layout and don't include anything crazy expensive like adding a 905P on top of the 800P M.2. The shortcomings in my original plan were that I included a small L2ARC that was likely pointless, and my SLOG was not taking advantage of the device with the lowest write latency. I landed on that layout partly because I thought the SLOG needed to be on a mirror no matter what, without considering that the Optane is data-safe during a power loss. The reason there are two scenarios mostly has to do with where I locate my "compute" pool. The hardware is exactly the same as in my original post, but I will include it here for reference, followed by the two scenarios.

Hardware components
  • TrueNAS Mini X+ w/ 8 cores, 64 GB RAM, dual 10GbE NICs
  • 5x: Western Digital Red Plus 10TB 3.5” HDDs
  • 1x: Intel Optane 800P 118GB M.2 PCIe 3.0 NVMe
  • 2x: Samsung 870 EVO 1TB 2.5” SSDs

Scenario A

file-storage zpool

vdev type | vdev topology | Devices                                  | Net Capacity | Note
storage   | raidz2        | 5x 10TB 3.5" HDDs                        | 30 TB        |
slog      | single        | Intel Optane 800P M.2 SSD (slice 1, 50%) | 59 GB        | First half of the Optane SSD
special   | mirror        | 2x Samsung 870 EVO SSDs (slice 2, 10%)   | 100 GB       | Small (10%) slice of the EVO SSDs; metadata only, no datasets with special small files

compute zpool
vdev type | vdev topology | Devices                                  | Net Capacity | Note
storage   | mirror        | 2x Samsung 870 EVO SSDs (slice 1, 90%)   | 900 GB       | 10% of capacity used for the file-storage zpool special vdev
slog      | single        | Intel Optane 800P M.2 SSD (slice 2, 50%) | 59 GB        | Second half of the Optane SSD

Pros
  • All compute-pool data is on faster SSDs
Cons
  • SLOG device shared between two pools. Potential hindrance to write latency if both pools are under heavy sync-write load, though in my use case I think this would have a minor impact at most, since the file-storage pool would be predominantly async.
  • No L2ARC
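For clarity, Scenario A in rough command form would look something like the sketch below (partition names are placeholders; in practice this would be set up through the UI):

Code:
  # file-storage: RAIDZ2 data, mirrored 10% EVO slices as a metadata-only special vdev,
  # first half of the Optane as SLOG
  zpool create file-storage raidz2 da0 da1 da2 da3 da4 \
      special mirror ada0p2 ada1p2 \
      log nvd0p1
  # compute: mirror on the 90% EVO slices, second half of the Optane as SLOG
  zpool create compute mirror ada0p1 ada1p1 log nvd0p2

With special_small_blocks left at its default of 0, only metadata would land on the special vdev, which matches the "metadata only" note in the table.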

Scenario B

unified file-storage and compute zpool

vdev type | vdev topology | Devices                                  | Net Capacity               | Note
storage   | raidz2        | 5x 10TB 3.5" HDDs                        | 30 TB                      |
slog      | single        | Intel Optane 800P M.2 SSD                | 118 GB                     |
special   | mirror        | 2x Samsung 870 EVO SSDs (slice 1, 50%)   | 500 GB                     | Metadata for all datasets; special small files for compute datasets
l2arc     | single (x2)   | 2x Samsung 870 EVO SSDs (slice 2, 50%)   | 1 TB (slice 2 of each SSD) |

Pros
  • Has L2ARC available to both file-storage and compute datasets
  • Flexible distribution of storage capacity between file-storage and compute tasks
Cons
  • Compute pool data blocks are on spinning HDDs rather than SSDs. However, the metadata and small files will be on SSD, which should mitigate the impact of the slower disks to some degree.
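The "special small files for compute datasets" row in the Scenario B table would be controlled per dataset with the special_small_blocks property, roughly like this (dataset names are made up, and the 64K threshold is just an example):

Code:
  # Send blocks up to 64K from the compute datasets to the SSD special vdev
  zfs set special_small_blocks=64K pool/compute
  # Media/backup datasets stay at the default of 0, so only their metadata uses the special vdev
  zfs get special_small_blocks pool/media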
Summary

The main difference between the two scenarios is giving up pure SSD storage for the compute pool in trade for L2ARC across the board. The negative side of that trade-off is minimized to some degree by offloading the metadata and small files to SSD. The benefit of having L2ARC even for the file-storage datasets is likely not great, but I am assuming it could help in a scenario where, say, a container is accessing file-storage data exported via NFS. Or something like that...

Questions

As I am definitely no expert with TrueNAS or ZFS, I consider everything above to be potentially flawed and possibly stupid. :)
  1. First and foremost, do the scenarios check out on a technical level? Am I making some sort of assumption or doing something that is a strict no-no with ZFS?
  2. How badly will the compute datasets be impacted in Scenario B with them on HDDs? I expect them to be less performant on any large file operations, of course. But I am curious whether the result is still more than acceptable for random reads, given the combination of offloaded metadata, small files, and RAM/L2ARC.
  3. Is there a Scenario C using the same hardware (or relatively the same) that would yield a better result than A or B that I haven't considered?
Thank you for taking the time to read this. If you have any suggestions or comments, please do not hesitate. If not, I hope this was at least an interesting read.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The SLOG options are a sliding scale of risk - most risky is "no SLOG, async writes", then "single SLOG", and finally "mirrored SLOG". The SLOG is only ever read from (and depended on) during a boot/import after an unexpected crash, and again, it's only for sync-write data such as (potentially) VMs. But in terms of overall write endurance, the Samsung 870 EVO is well behind the Optane anyway.

Asynchronous writes will always be the fastest option if data-in-flight integrity isn't critical, but they're only appropriate for data that can be recopied. If it's just media files being copied to or read from the share, a sudden crash would simply require you to copy the file again. But if you're editing files in-place on a network share, or running VMs (or Docker/K8s) that are writing to a "virtual disk" stored on TrueNAS, then that data can't be "replayed" later.
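Whether writes are treated as sync or async comes down to what the client asks for plus the dataset's sync property, which you can tune per dataset - for example (dataset names here are just illustrative):

Code:
  # Force every write on the VM/container dataset through the ZIL (and the SLOG, if present)
  zfs set sync=always compute/vms
  # Leave the media share honouring whatever the client requests (the default behaviour)
  zfs set sync=standard file-storage/media
  # sync=disabled skips the ZIL entirely - fastest, but in-flight data is lost on a crash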

I took a look at your updated scenarios - of the two, I like A more than B, but I genuinely don't believe you need the L2ARC or special devices on your spinning-disk pool, given the described use case of "media, backups, and files." I don't think you need the SLOG there either, to be honest, but splitting the Optane between the two pools is likely to work out better than shaving off 10% of the Samsung drives as a special vdev.

The challenge with wanting to split off a "piece" of your Samsung SSDs is that mixed read/write workloads can hurt the SSD's performance. NAND is definitely better at handling it compared to spinning disks, but it's not perfect by any means. (Optane is an exception, but you pay for that privilege!)
 