Couple Sanity Check Questions

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
It's been a while since I've used --well... I've never used TrueNAS. I went back through my user/posting history and it shows a couple of posts back in 2012, then a few in 2015. I used FreeNAS a bunch in the 2010-2015 time range, then I sold out and moved over to SMB desktop NAS devices. The current hardware I have is starting to fail, I'm nearly out of space, and I need more performance, so I thought I'd build a TrueNAS CORE machine for a year or three... Learn, make a bunch of mistakes, etc... Then in a few years, TrueNAS SCALE will be a little more mature, and I'll know enough to install/build a decent TrueNAS SCALE storage machine. So that's the plan... Something small-ish, mostly for ESXi VMs (NFS) for a year or so, then eventually move over to a new build on TrueNAS SCALE.

I have a bunch of 2TB 3.5" drives. A lot --probably thirty (30) in total across all machines. They are OLD. They are the Seagate vintage with the lawsuit over all those drive failures. Date codes are 2010-2012. I saw someone post about making a triple-mirror setup for SSDs. As I understood it, it was basically a HW RAID-10 (striping mirrored pairs), except striping three-way mirrors instead of mirrored pairs??? I expect a decent failure rate, but what else am I going to do with the drives? Drill holes in them and bin them? What I understood of it was the following:

vdev0:
drive0 <-mirror-> drive1 <-mirror-> drive2
vdev1:
drive3 <-mirror-> drive4 <-mirror-> drive5
vdev2:
drive6 <-mirror-> drive7 <-mirror-> drive8

And then stripe the above vdevs.

Question #1 would be: Is striping three-way mirrors like that "a thing"?
#2: Sanity check... In a normal 2-drive mirror-then-stripe setup (like RAID 10), I can add and subtract vdevs from the pool, correct? If I have three mirrored pairs striped, I can add a 4th mirrored vdev pair to the striped pool, correct? Can I remove vdevs from a pool? How about changing size? If I have 3x two-drive mirrors with 2tb drives, when all of the 2tb drives are gone/dead, can I buy two 4tb drives, mirror them in a vdev, and add that 4tb mirror into the pool with the 2tb vdevs?

#3: vdevs have to be like-like, correct? So if I do a three-drive mirror as shown above, and I wanted to buy two new 2tb drives, or two new 4tb drives... That won't work, because a vdev of two drives mirrored != a vdev of three drives mirrored. Is that logic correct? vdevs in a pool must match? And **I think** mirrored vdevs can change size across a striped pool, but **I think** that other vdevs like rz-n need to be same:same, identical in size, type, arrangement, etc. when added to a pool? So if I wanted to add a vdev of 4tb drives to a pool with all 3tb drives... No big deal... Just partition/format down the 4tb drives to match the size of the 3tb vdevs, and present the 4tb vdev to the system as if it were 3tb? (hope that made sense)

#4: Back when I last used FreeNAS, stuff like Optane and NVMe didn't exist (or wasn't at all mainstream). I was paying a lot of money for 64gb & 120gb NAND SSDs. I have a 10gbe network. The data and traffic is going to be vmware and nfs. I have a xeon e-2246g on a supermicro board (c242 chipset) I can re-purpose for a TrueNAS Core build. The board has one pcie x4 m.2 slot. Will I be fine running that e-2246g cpu with 128gb ecc, some fast-ish 1tb nvme drive with plp, and then an hba with a pool of striped mirrored pairs (or triplicate mirrors)? I feel like my brain is mush after reading so much about slog/zil, l2arc, etc. I only have a 10gbe network. Do I need anything more than max RAM, a decent-ish (with plp) nvme for slog, and a solid hba card?

#5: I haven't really found much about BSD support for Intel Quick Assist or BSD-friendly HW encryption & accelerators. Am I looking in the wrong place?

Thanks.

#6: Last question --SCALE... Who is it aimed at? What wife-acceptance-factor will it have? Is it aimed at SMB, or is it more of a home/retail integrated product? I'll probably try it and install it on something to check it out, so I'm mostly wondering about the focus of the project. She recently upgraded her laptop and phone (Apple), which caused issues with some of our older NAS devices (security, old smb/nfs protocols, etc.). I'm wondering if TrueNAS SCALE is going to be focused on the retail NAS market with easy GUIs, packages, etc. that are family-friendly, or will it be a little more SMB/enterprise (not as WAF friendly)? Thanks.
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
Hmm. No comments? Does that mean I'm pretty much on-point, or did I type too much? Or is it that it's around Black Friday and everyone is too lazy to read what I wrote?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Some of us were a bit busy because of the holidays. If the questions had been fewer or shorter, those of us close to food comas might have had the brain power to answer.

I'll give some a try:

#1 - Yes, 3 drive Mirrored vDevs are fine. Striping is automatic when more than one vDev is available.
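For example, a pool laid out the way you sketched could be created from the CLI roughly like this (pool name and device names are placeholders; the GUI does the same thing when you pick three disks per vDev):

# three 3-way Mirror vDevs; ZFS stripes writes across the vDevs automatically
zpool create tank \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  mirror da6 da7 da8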

#2 - Yes, you can add a 4th vDev, Mirrored or RAID-Zx. But, at present, you can only remove mirrored vDevs in a pool that consists of Mirrors with the same configuration. Basically if you want to remove a 3 way Mirrored vDev, all the OTHER vDevs have to be 3 way Mirrored vDevs, (if I understand the feature correctly).
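As a rough sketch (placeholder names again), adding a 4th Mirrored vDev later, or removing a top-level vDev, looks like:

# add a 4th 2-way Mirror vDev to the pool
zpool add tank mirror da9 da10

# remove a top-level Mirror vDev (subject to the restrictions above);
# "mirror-3" is the vDev name shown by 'zpool status'
zpool remove tank mirror-3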

#3 - vDevs are SUGGESTED to be similar or the same. But, at a ZFS level you can have RAID-Zx & Mirrored vDevs in the same pool. I don't recall the status of the GUI, but it likely will prevent mixing Mirrored & RAID-Zx in the same pool. You can have 2 way Mirrors in some vDevs and 3 way Mirrors in other vDevs, (except in the case of removal, see #2 above). The disks in a Mirrored vDev don't have to be the same size, but ZFS will use the smallest disk for the vDev's size. And other vDevs in the pool can be smaller or larger.
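For instance, you can grow an existing 2-way Mirror vDev into a 3-way Mirror just by attaching another disk to one of its members (placeholder names):

# attach da9 to the Mirror vDev that already contains da0,
# turning that 2-way Mirror into a 3-way Mirror
zpool attach tank da0 da9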

#4 - Sorry I can't answer this one.

#5 - Can't answer this one either.

#6 - iXsystems is a business, so I would guess that TrueNAS SCALE is prompted by either client requests, or perceived need. I doubt it's going to be more user friendly than TrueNAS CORE. In fact, I expect it to be 5 times worse, IF you include the storage clustering, VMs, and container support. That might change in 2 years. But, for the moment, it's Beta, while CORE is solid, (for the features it has).


I've left out lots of misc. details, like the fact that vDev striping is not guaranteed. ZFS will try to balance the amount of used space in each vDev. Simple example: you have 1 vDev and it's getting full, so you add a second; ZFS will prefer the second because it has less used space. ZFS will not automatically migrate / balance existing used space across newly added vDev(s). That's the "holy grail" ZFS does not have at present.
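You can see how full each vDev is (and therefore where new writes will tend to land) with something like:

# per-vDev capacity and allocation for the pool
zpool list -v tank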
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
Thanks for the reply Arwen... I was kidding around (a little) about people not replying to my post. I do realize it was long, plus black/cyber days, etc. Thanks for reading & replying. It was a big help clearing a few things up. I had planned on starting small and adding mirrored vdevs as the system grew... But after reading your reply, I bit the bullet and bought 9x matching HDDs (one spare) so I can run 4x mirrored vdevs.

I'm still a little iffy about my build and not quite sure where to get more info regarding slog and l2arc. For slog I'm not sure on sizes, transaction groups, etc. I'm also not sure on slog media --is a fast nvme enough, or do I want something more? nvdimm is out of the budget, but how much of a performance increase will I see by going to an Intel DC pcie ssd vs. a "fast" nvme m.2 device with plp (pcie x4 cpu lanes)? This is for virtual machines, and at this point I'm concerned more about latency than bandwidth.

Thanks for the reads and the reply.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First up, when you have the chance, "burn in" your new hard drives. If any fail now, it's easier to send them back and get a replacement. And yes, occasionally a brand new drive will fail. There might be a forum thread on burning in hard drives.
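If it helps as a starting point, a long SMART self-test on each new disk catches the obvious early failures (smartmontools ships with TrueNAS; da0 is a placeholder device name):

smartctl -t long /dev/da0    # kick off the long self-test
smartctl -a /dev/da0         # check the results / error counters afterwards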

Next, L2ARC is really only useful if you max out your RAM first. The pointer / directory table for a L2ARC lives in RAM, so a too-big L2ARC with limited RAM means you have even less RAM for the ARC itself. The one exception is using the L2ARC for "metadata" only. That's what I think I'll use an SSD I have for: L2ARC for "metadata" only. (But, I also have 64GBs of RAM, though not the max the board takes...)
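A metadata-only L2ARC is just a cache device plus a ZFS property, roughly (placeholder pool/device names):

# add the SSD as an L2ARC (cache) device
zpool add tank cache nvd0

# keep only metadata in the L2ARC for this pool (the property is inherited by its datasets)
zfs set secondarycache=metadata tank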

SLOGs are useful ONLY for synchronous writes, e.g. some NFS and iSCSI workloads. And yes, NVMe is fast enough. Even a good SATA SSD with PLP, (Power Loss Protection), is better than nothing when using spinning rust as main storage. There is a sticky thread or two somewhere here in the forums on SLOGs.
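Adding one is the same idea as the cache device; if the pool matters, mirror the SLOG (placeholder names):

# single SLOG device
zpool add tank log nvd0

# or a mirrored SLOG
zpool add tank log mirror nvd0 nvd1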

If you plan on using your pool for VMs, 4 vDevs of Mirrored disks is a better choice than RAID-Zx, or than fewer vDevs.


One last note, if you have another ZFS dataset for critical data that is not too large, you can reduce the risk of loss by using "copies=2" on that dataset. It will write 2 copies of each data block irrespective of the pool configuration, (which is 2 way Mirroring in your case), and will attempt to put the extra copy on another vDev. My personal writings and some other things might be important enough for me to do that eventually.
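It's just a per-dataset property, e.g. (hypothetical dataset name):

# store two copies of every block in this dataset, on top of the pool's own mirroring
zfs set copies=2 tank/documents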
 

winstontj

Explorer
Joined
Apr 8, 2012
Messages
56
Thanks again for continuing to reply to my posts. Drives came yesterday. I'm still scarred from 10-ish years ago when Seagate had all those drive failures so yes, I'll burn them in.

My biggest concerns are latency and hammering on spindle drives with random I/O. Well... And also that I need a xeon e-2100 or e-2200 CPU, which seem to be rare as hen's teeth these days... I ended up going with spinning rust over SSD --not so much because of cost, but because an SSD-only NAS would have been a waste of money: I only have a 10GbE network, so I couldn't possibly use the bandwidth that an SSD-only NAS would provide. I was considering a ~10tb ssd nas --or even nvme, because of size, footprint, no moving parts, etc. Also because, as I understand it, ssd/nvme media is accessed electrically, all at once, vs. a spindle hdd where only one track on the platter can be written to or read from at a time. My assumption was that while I would be wasting money with an all ssd/nvme NAS array, it would mitigate most of my concerns about latency and remove all concerns about random I/O.

But the cost started to get too crazy. We just aren't quite there yet with affordable all nvme/ssd (or nvdimm) storage. (at least for my budget)

So I went with two pools. Pool #1 (or pool zero) will be 4tb 7200rpm drives --I'm still trying to figure out the performance difference between 4x, 5x or 6x mirrored pairs. I understand more spindles is better, but how much better will 5x or 6x mirrored pairs be compared to 4x (8 drives) mirrored pairs? I can't find any good metrics/numbers/data on those figures.

Pool #2 (or pool one) will be 4 drives (two mirrored pairs) of 8tb 5400rpm red drives. That will be our "house stuff": personal documents, some movies, music, try and hook Apple backup to that array/pool, etc.

The board is c242 chipset and I have 128gb ddr4. I **should** be OK with memory. That said, I own the mobo and I own the memory, but I haven't bought a CPU yet so I'm open to alternatives.

I understand SLOG (it's not a cache), and I have seen/read about l2arc for metadata --I just haven't found very many "recipes" or builds with parts and performance numbers behind them... specifically for running ESXi VMs 24/7. I don't know what size to use for slog/l2arc --you can buy a 16gb optane m.2 stick for $22 on Amazon. If I put a 16gb optane m.2 on pcie x4 lanes to the CPU, will that be good/fast enough --and large enough-- for metadata-only L2ARC?

SLOG: I have basically the same question as I do with L2ARC. Is a 16gb optane stick via m.2 and pcie x4 to the chipset enough space, and fast enough? I've read that people put small files on their SLOG device?? Something like a max of 512kb maybe?? Maybe 64kb?? (small). Obviously the larger the file size, the larger the drive space needed. Is there a rule of thumb for SLOG drive size per TB of data? Is there a max file size to allow to be written to the SLOG device?

Thanks.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Assuming you actually could benefit from a L2ARC or SLOG, they have different device criteria.

SLOG
- write endurance, (Optane is supposed to have great endurance)
- write speed, (SLOGs are only read after a crash, so read speed matters much less)
- power loss protection is a desirable feature
- small is fine, like 16GBs, as it will only store a few ZFS transactions (rough sizing sketch after this list)
- if data is important enough, mirrored SLOGs
- loss of SLOG during normal operation is fine, no data loss, synchronous writes simply slow down.
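Rough back-of-envelope on the sizing, assuming the OpenZFS default of closing a transaction group about every 5 seconds:

10GbE worst case: ~1.25 GB/s of incoming synchronous writes
~2 transaction groups in flight x 5 s x 1.25 GB/s = ~12.5 GB

So a 16GB device covers it, regardless of how many TB the pool holds.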

L2ARC
- read speed over endurance.
- with 128GBs of memory, >=60GB of L2ARC is acceptable. Even 240GB. But, too big takes too much memory for indexes.
- loss of L2ARC simply slows down the pool's reads until the replacement is filled.
- a pool can have multiple L2ARCs, that work in round robin, (though only 1 will have specific cached data)
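Once a log or cache device is in place, you can watch how much it's actually being used with something like:

# per-device I/O for the pool, including log and cache devices, refreshed every 5 seconds
zpool iostat -v tank 5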
 