First TrueNAS Build - Would like input on my hardware selection

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
My first ever TrueNAS build. I've done a bit of research and I think I'm along the right lines. Can anyone spot anything that could be improved here? I'd also like your opinions on which drives I should get, and what I can do with my existing drives.

I already own 3 x Intel DC S3610 800GB SSDs. I have no plans for these.

Chassis: SuperChassis 826BE1C-R920LPB (12 x 3.5" bays in the front, 2 x 2.5" in the rear, 2 x 920W PSUs, and a BPN-SAS3-826EL1 backplane, which is a SAS3 expander!). I've already bought this for $360 including shipping. I also have a bunch of misc spare Supermicro power supplies at home anyway. I'll need to buy rails at $90.
Motherboard: Supermicro X11SSH-F (the downside for me is the M.2 slot being only PCIe x2, but I can't find a good alternative)
CPU: Xeon E3-1240 v5 - This seems more powerful than it needs to be, but I don't want to go too low now and regret it later. Also, I can't find a CPU much cheaper than one of these at $80.
RAM: 64GB DDR4 ECC unbuffered (4 x 16GB) of some description. I do wonder if I can get away with 32GB though.
Heatsink: Supermicro SNK-P0046A4. I have a system with one of these already; they are $20 and work well.
HBA: LSI 9300-8i - $158
NIC: Mellanox ConnectX-4 Lx CX4121A. These cost $180, but they seem to be one of the best NICs around, and support 10G SFP+ as well as 25G SFP28. Good for the future.
Boot Drives: 2 x Intel SSD 330 Series 60GB. I already owned one, and I found a second for $15 delivered.

Does that seem like a solid start? I only require about 3TB of total storage, so I'm trying to figure out the best deal on drives. I want to go with something very reliable but also fast, and something I can source spares for, for a while.

The SLOG will probably end up being Optane, and for the L2ARC I'll just find a good-quality M.2 SSD.

Still unsure what I should do with those S3610 800GB drives.

Would love your thoughts. So far my costs are at $1455.

 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Bonus question: how much will the L2ARC and ZIL/SLOG help? My instinct is to buy the best-value LARGE disks I can find, which right now are 10TB disks. But it seems crazy to get something like 6 of them when I need so little storage. Would I get alright performance with 4 x 10TB disks in RAIDZ2?

My other alternative is to buy a bunch of 2TB SAS disks; they are on eBay for just ~$18 apiece, $9.50/TB. I could grab 6 of those and I assume I'd get good performance. But it's just not very dense at all; that's a lot of rust for such a small amount of storage. Perhaps 4TB SAS disks at around $50 a pop would be better. That would be just $300 and around 15TB of storage. Performance there should be pretty good, right?
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
Houston? Was down there yesterday... My glasses kept fogging up.. :cool:

Good stuff in your build for sure, but the first question you have to ask... What is it going to be doing? That sets the expectations...

A couple quick thoughts:
1. You mention RAIDz2 and 10GbE. Is your use case mostly read-only, or transactional writes? The thing about RAIDz is that the write performance of a single vdev is generally equivalent to the write performance of a single component drive, and a single spinning rust drive doesn't come anywhere near 10GbE speeds. In a read-heavy application, you'll want to match the number of vdevs in the pool to your transfer budget to hit the read rate you need. If you're doing iSCSI block storage, just forget RAIDz and set up multi-vdev mirrors.

2. SLOG/ZIL... The SLOG is a unique corner case in ZFS. To understand its history, you have to understand the NFS file-sharing protocol and Sun Microsystems' PrestoServe battery-backed RAM card back in the '90s. NFS, by its design spec, mandates that all writes be performed synchronously ("O_SYNC" in C/C++): the data has to be applied to the device and confirmed before the OS can move on to the next step. Only iSCSI and NFS shares, and jail/VM activity that uses O_SYNC writes, benefit from a SLOG; Windows SMB shares will not.

3. RAM... More is usually better, but the questions people will ask are: Are you running VMs? Jails? Planning on using dedup? Always max out RAM before reaching for an L2ARC.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Thanks for the feedback.

For 1, it's mostly read. But if I want good write performance, does that mean a decent chunk of RAM, and possibly a nice big L2ARC device, should be a priority? I'd want writes to this pool to happen at 10GbE (actually, 25GbE would be nice...), but they wouldn't last very long at all. The total data set is less than 3TB currently.

For 2, good information. I may do some iSCSI into ESXi in the future, but right now it will be just SMB. I guess I will skip a SLOG completely and just add one later if needed.

For 3, I'll try to go for 64GB, but I may break it up into two purchases and upgrade a little later. No VMs, no jails, and I'm not planning on using dedup.

If I do multiple mirrored vdevs, is that akin to RAID10, where I could lose 2+ drives and be okay, but could also get very unlucky, lose 2 drives from the same mirror, and lose the whole pool? This storage must be able to withstand losing 2 drives.

Once I went on vacation to Big Bend National Park and my Synology emailed me about a failed drive on the way there; on the way back I got the email for a second one. Since then, I've always wanted a RAID6 equivalent.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Was just doing some more reading, so the L2ARC is for read performance only.

Is there any way to get an SSD in the mix for write caching? Perhaps I need to re-evaluate my disk configuration here to optimize writes, if I can't cheat with SSDs.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Okay, more research done.

It looks like the ST6000NM0034 6TB SAS drives write at around 221MB/s best case. If I do three mirrored vdevs, I should in the best-case scenario get 663MB/s writes assuming no caching help, and well over 1300MB/s reads.
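The arithmetic above can be sketched quickly. The 221MB/s figure and vdev counts come from the post; the scaling model is the usual best-case rule of thumb, and real-world numbers will land below it:

```python
# Best-case throughput model for a pool of mirrored vdevs.
# Writes scale with the number of vdevs (both halves of a mirror are
# written in parallel); reads can be served by either half of each mirror.
drive_mbps = 221        # ST6000NM0034 sequential write, best case
vdevs = 3               # three 2-way mirrors
mirror_width = 2

write_mbps = vdevs * drive_mbps
read_mbps = vdevs * mirror_width * drive_mbps

print(f"write ~{write_mbps} MB/s, read ~{read_mbps} MB/s")
# write ~663 MB/s, read ~1326 MB/s
```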

That gives me around 18TB of space, which seems fair
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
There are different ways of dealing with the redundancy problem. With mirrors, you can sustain multiple failures as long as each vdev remains redundant. You can set up RAIDz2 vdevs capable of sustaining double failures, and then use multiple vdevs to gain performance; ZFS will round-robin between vdevs. Another option is to assign a hot spare. In that vacation scenario, your NAS would have kicked the hot spare into the pool and started resilvering while you were on vacation. Of course, the resilvering load/stress could tip another drive into failure. I try to avoid having a pool of drives that are all the same age, but that's not always possible.
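To put numbers on the "unlucky double failure" question: a striped-mirror pool only dies if both halves of the same mirror fail. A quick sketch enumerating every possible two-drive failure in a hypothetical six-drive, three-mirror pool:

```python
from itertools import combinations

# Hypothetical pool: three 2-way mirrors, drives numbered 0..5.
# Drives 0-1 form vdev 0, drives 2-3 vdev 1, drives 4-5 vdev 2.
vdevs = [(0, 1), (2, 3), (4, 5)]

def pool_survives(failed):
    # The pool is lost if any vdev loses all of its members.
    return all(not set(vdev) <= set(failed) for vdev in vdevs)

drives = [d for v in vdevs for d in v]
double_failures = list(combinations(drives, 2))
fatal = [f for f in double_failures if not pool_survives(f)]

print(f"{len(fatal)} of {len(double_failures)} two-drive failures kill the pool")
# 3 of 15 two-drive failures kill the pool
```

So most double failures are survivable, but a RAIDz2 vdev is the only layout here that survives *any* two failures.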

Another thing you may wish to consider is splitting your data between pools. I have a performance pool and a bulk storage pool. A pool of 9 x 500GB SSDs set up as RAIDz in three vdevs gives you your 3TB, with speeds a multiple of the individual drives'. If your workflow allows, you can archive to the spinning rust pool as a daily/weekly replication task.

When you get to your ESXi iSCSI experimenting... You can attach a SLOG after the fact. Of course with an SSD pool there's not much point.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
If I have a pool full of mirrored vdevs, can I replace those mirrors with larger drives one at a time, do you know?
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
If I have a pool full of mirrored vdevs, can I replace those mirrors with larger drives one at a time, do you know?

You can replace them one at a time, but the pool will only increase in capacity when both mirror halves in the vdev are upgraded.

So if you start with a 2 x 2TB mirror and add another 2 x 2TB mirror vdev, you'll have a 4TB pool. If you then replace one drive with a 4TB unit, you'll still have a 4TB pool; the extra 2TB is wasted. If you then upgrade the other 2TB drive in that vdev to a 4TB drive, making a 2 x 4TB & 2 x 2TB pool, you'll have 6TB available (after expansion).
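The capacity rule described here, where each mirror vdev contributes only its smallest member, can be sketched like this (TB sizes taken from the example above):

```python
def mirror_pool_capacity(vdevs):
    """Usable capacity of a pool of mirrors: each vdev contributes
    the size of its smallest member (in TB)."""
    return sum(min(vdev) for vdev in vdevs)

# Start: two 2x2TB mirrors -> 4TB usable.
print(mirror_pool_capacity([(2, 2), (2, 2)]))   # 4

# Replace one drive in a vdev with a 4TB unit: the extra 2TB is wasted.
print(mirror_pool_capacity([(2, 2), (4, 2)]))   # 4

# Replace its partner too: that vdev grows, and the pool is now 6TB.
print(mirror_pool_capacity([(2, 2), (4, 4)]))   # 6
```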
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Perfect.

I've ordered most of my components except an L2ARC, which will probably end up being a 1TB Samsung 970 EVO for $150, and memory, which I'm waiting on a best offer for.

I found a few 4TB disks, and I ordered some more 4TB NL-SAS disks to round it out. I'll have three mirrors in the pool and two spares on the shelf.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Here is where I have ended up. If you saw my other post, you might have seen I've decided to keep my bulk media in a TrueNAS VM in ESXi. I'm passing through an LSI 9207-8i to the VM, and I can give it as much RAM as it wants, too.

My goal is to keep half the drive bays spare. All through my life, when it comes time to add storage, it's a pain. This way I could buy all new drives, throw them in the other side, and make a whole new array.

The second box helps save RAM on the main box, keeps my drive bays spare (those drive bays were wasted before!), and gives me a place to replicate important data and complete local backups.

I moved my bigger drive to Blue Iris, as I need some more storage in there.



Still don't have a use for those Intel DC S3610 800GB SSDs though, and I'm not sure if I should put the 60GB boot drives in the rear 2.5" slots, or just jam them inside somewhere and keep those slots free for extra storage drives.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
BTW - One of the neat things about ZFS: it tracks the vdev member disks by GUID. So you can power down, move drives around, even between controllers, and it simply puts the pool back together on reboot/import.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
That's pretty handy.

With RAIDz2, can I also swap one drive at a time and expand?

All components are on their way now; can't wait to do the juggle of storage devices.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
With RAIDz2, can I also swap one drive at a time and expand?

Once all the devices have been exchanged and resilvered, you can expand even RAIDz2. It doesn't rebalance the existing data, of course.

Generally, what you can't do in ZFS is shift between RAID types. No RAID1 -> RAIDz -> RAIDz2 migrations like an LSI/PERC controller allows.
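The same smallest-member rule applies within a RAIDz2 vdev: usable space is roughly (n - 2) data drives' worth, limited by the smallest member, so nothing grows until the last drive is swapped. A rough sketch, ignoring metadata and padding overhead:

```python
def raidz2_capacity(drives):
    """Approximate usable capacity (TB) of a single RAIDz2 vdev:
    (n - 2) data drives' worth, limited by the smallest member."""
    return (len(drives) - 2) * min(drives)

# Six 4TB drives -> ~16TB usable.
print(raidz2_capacity([4] * 6))        # 16

# Five drives swapped to 8TB, one still 4TB: no growth yet.
print(raidz2_capacity([8] * 5 + [4]))  # 16

# Last drive swapped: the vdev can expand to ~32TB.
print(raidz2_capacity([8] * 6))        # 32
```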
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
That sounds great. I think this will all work well.

On my second system, which will be in ESXi, it will have 35-40TB of storage, mainly bulk media, but it will also hold replicated data from my main system (~1.2TB).

Does an L2ARC cache help with incoming replication data at all? I know it's not a write cache, but I wasn't sure if it ever needed to read the replication data. The system will have 64GB of RAM, but I could up it to 128GB.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
That sounds great. I think this will all work well.

On my second system, which will be in ESXi, it will have 35-40TB of storage, mainly bulk media, but it will also hold replicated data from my main system (~1.2TB).

Does an L2ARC cache help with incoming replication data at all? I know it's not a write cache, but I wasn't sure if it ever needed to read the replication data. The system will have 64GB of RAM, but I could up it to 128GB.

It impacts all reads. But understand that RAM is the "L1ARC". Always max out your RAM before reaching for an L2ARC; RAM will always be faster than even a direct-attach NVMe device. In marginal circumstances (like my little 16GB home NAS), adding an L2ARC will actually consume RAM to keep the L2 organized and possibly make things worse! Beyond that, I'll refer you to Brendan's original blog post:
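To get a rough sense of that RAM cost: every record cached in L2ARC needs an in-RAM header. The header size and average record size below are assumptions for illustration; the real figures depend on the OpenZFS version and workload:

```python
# Rough estimate of RAM consumed by L2ARC headers.
# The 70-byte header and 64KB average record size are assumptions;
# real values vary by OpenZFS version and dataset recordsize.
l2arc_bytes = 1 * 10**12          # a 1TB L2ARC device
avg_record = 64 * 1024            # assumed 64KB average cached record
header_bytes = 70                 # assumed per-record in-RAM header

records = l2arc_bytes // avg_record
ram_gb = records * header_bytes / 10**9
print(f"~{ram_gb:.1f} GB of RAM just to index the L2ARC")
```

On a small-RAM box, that index can crowd out ARC itself, which is the "possibly make things worse" scenario.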

 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Thanks!

I have most of this stuff and am ready to set up the secondary box while I wait for more packages.

I have not kept up with the 4K vs. 512-byte sector debate. Are most drives now 4K sectors? Is this a concern going forward? I read something about not being able to mix them in a pool.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
I have not kept up with the 4K vs. 512-byte sector debate. Are most drives now 4K sectors? Is this a concern going forward? I read something about not being able to mix them in a pool.

To my knowledge that debate has been running for 10+ years and still isn't settled! :smile:
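For what it's worth, ZFS handles this through the vdev's ashift property, fixed at creation time: ashift is the log2 of the sector size the vdev assumes, and it can't be changed later, which is why mixing sector sizes gets awkward. The common advice is ashift=12 (4K) even for 512-byte drives, since a 4K-aligned pool runs fine on 512-byte media but not the other way around. The relationship is just:

```python
import math

# ashift = log2(sector size in bytes); fixed per vdev at creation.
def ashift_for(sector_bytes):
    return int(math.log2(sector_bytes))

print(ashift_for(512))    # 9  (legacy 512-byte sectors)
print(ashift_for(4096))   # 12 (4K "Advanced Format" sectors)
```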
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Still waiting on some parts, but I changed some things around, got a lot more disks, and built out my second TrueNAS box in ESXi.

The PSUs are now the 500W ones, after monitoring usage on my ESXi system with dual E5-2680 v4's and 256GB DDR4. The CPU will now be an E3-1270 v5.



The performance of the ESXi VM TrueNAS is pretty good: 800-900MB/s reads, 200-250MB/s sustained writes. This is just for bulk media and will hold replicated snaps from the main box.

Now that I have so many 8TB disks, I've thought about building the main box with 8TB disks too, but I just don't need the storage. So I will work with the 4TB disks and swap as needed; otherwise I have zero use for them.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
I might end up adding another 4TB mirror if I want better performance, since I have the drives spare.

The 4TB SAS disks are all used and the same age, so I will try to mirror a SATA disk with a SAS disk, just to break up the chance of a double failure.

One of the 8TB SAS disks already failed about 12 hours into copying data. It resilvered with no issues.
 