TrueNAS SCALE build

DanReid21 · Cadet · Joined May 11, 2022 · Messages: 2
Hi All,

Despite not being an experienced user of TrueNAS, after building my first DIY backup target I've decided to go all in with iXsystems. I'm redesigning my primary datacenter around TrueNAS and have purchased several supported solutions from iXsystems:

TrueCommand Software Suite
Primary SAN: 135TiB raw all-NVMe F60 (arriving this week) (VDI/HPC/AI)
Backup Target 1: 180TB raw TrueNAS Mini R (2x 10Gb SFP+)
Backup Target 2 (geo): 180TB raw TrueNAS Mini R (2x 10Gb SFP+)

I'm currently running an AF5000 all-flash system alongside an HF20 hybrid-flash system that will be relegated to my backup datacenter once the conversion is complete.

The concern: space / performance
1) I'm not sure that 80TiB (usable capacity) of NVMe storage will be enough to hold all of our data, and
2) The TrueNAS Mini R would not allow me to complete an instant restore of even a small portion of my VM workload in the event of a catastrophic failure of the NVMe storage. While we could fail over to the backup datacenter, that presents its own challenges.

So with that in mind, and after I blew my budget on the NVMe storage build, I decided to complete a second DIY build... and why not go big or go home :)

Build 1) Combine my Plex build and backup build into a single server that should satisfy both performance and capacity.
Build 2) A dedicated VM storage SAN.

I then spent the next several months researching what others thought about various builds and then went eBay shopping :)
Purchased 3 servers (one as a spare)
9x 60-bay SAS enclosures (redundant controllers / power)

I then went to work and thought through the approach I would use to ensure data integrity and performance, with as much redundancy as is reasonable.

The ask: review my config and thoughts, and challenge them, with the goal of improving the overall design.

TrueNAS SCALE is the desired platform for all installs, and the servers will most likely be 100% dedicated to storage (maybe a Sonarr/Radarr app - SCALE does containers rather than jails).

Server 1: Plex / Backup Target
Cisco UCS C240 - 2x Xeon Gold 6150 (18 cores each, 2.7GHz / 3.7GHz turbo) / 768GB RAM
2x 240GB M.2 boot drives (mirrored)
9x 960GB SATA SSDs (read cache / L2ARC)
2x 375GB Intel Optane NVMe SSDs (write cache / SLOG)
Storage
4 SAS HBAs total (see the attached Excel doc for a connection overview)
Using 6x 60-bay enclosures (360x 6TB Seagate Exos 12Gb/s SAS)
(each vdev spans 3 enclosures - 6 drives in RAIDZ2, 2 per enclosure)
58 vdevs in the pool - 21.88TiB / vdev = ~1,270 TiB pool (sanity-checked in the sketch below)
12 hot spare drives (116 parity drives)
Target 1: 1,000 TiB Plex storage
Target 2: 270 TiB backup storage (for the NVMe SAN / Server 2 VMs)
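
Quick sanity check on the pool math above (a rough Python sketch; it ignores ZFS metadata/slop overhead, which eats a few more percent):

```python
# Rough pool-capacity math for Server 1 (my own arithmetic, not vendor numbers):
# 6 TB drives, 6-wide RAIDZ2 vdevs (4 data + 2 parity), 58 vdevs.
TB = 1000**4   # drives are sold in decimal terabytes
TiB = 1024**4  # ZFS reports binary tebibytes

drive_tib = 6 * TB / TiB        # ~5.46 TiB per "6TB" drive
vdev_tib = (6 - 2) * drive_tib  # RAIDZ2 leaves 4 data drives per vdev
pool_tib = 58 * vdev_tib

print(f"{vdev_tib:.2f} TiB/vdev, {pool_tib:,.0f} TiB pool")
# -> 21.83 TiB/vdev, 1,266 TiB pool (before metadata/slop overhead)
```

Close enough to my ~1,270 TiB estimate; the same formula gives ~590 TiB for Server 2's 27 vdevs below.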

Server 2: VM Storage SAN
Cisco UCS C240 - 2x Xeon Gold 6150 (18 cores each, 2.7GHz / 3.7GHz turbo) / 768GB RAM
2x 240GB M.2 boot drives (mirrored)
9x 960GB SATA SSDs (read cache / L2ARC)
2x 375GB Intel Optane NVMe SSDs (write cache / SLOG)
2 SAS HBAs total (see the attached Excel doc for a connection overview)
Using 3x 60-bay enclosures (180x 6TB Seagate Exos 12Gb/s SAS)
(each vdev spans 3 enclosures - 6 drives in RAIDZ2, 2 per enclosure)
27 vdevs in the pool - 21.88TiB / vdev = ~590 TiB pool
18 hot spare drives (54 parity drives)
Target 1: 590 TiB - VMs (see the IOPS sketch below)
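
Since Server 2 is about performance more than space, here's the rule-of-thumb comparison that feeds into question 3 below (a rough sketch; the ~200 random IOPS per 7,200rpm SAS drive is my assumption, and ARC/L2ARC hit rates change the picture a lot):

```python
# Rule of thumb: a RAIDZ vdev delivers roughly one member disk's worth of
# small random IOPS, while a pool of mirrors scales with the vdev count.
# Assumed ~200 random IOPS per 7,200rpm SAS drive (my guess, not measured).
iops_per_drive = 200
data_drives = 180 - 18  # 162 drives after hot spares

raidz2_pool = 27 * iops_per_drive                   # 27x 6-wide RAIDZ2 vdevs
mirror_pool = (data_drives // 2) * iops_per_drive   # 81x 2-way mirrors

print(f"RAIDZ2 : ~{raidz2_pool:,} random IOPS")
print(f"Mirrors: ~{mirror_pool:,} random IOPS")
# -> ~5,400 vs ~16,200: mirrors give ~3x the IOPS at ~75% of the capacity
```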

Reasons for the current architecture:
1) A 360-drive SAN solution with ~8TB of read cache should be able to bring all the resources back online at a usable speed while the primary SAN is fixed/replaced (OK, not great).
2) vdev placement so that if an entire enclosure (60 drives) goes offline, the solution still runs (note: as I write this I realize the hot spares may make that untrue - see the sketch after this list). Question: should I instead confine each vdev to a single expansion enclosure rather than combining everything into one overall pool?
3) In server config 2, I'm not concerned about space as much as I am about performance and resiliency. Is there a better config (RAIDZ level? vdev layout?)
4) I have a 3rd server (identical config) but was thinking of using it as a lab machine... better use here, or overkill?
5) Dedupe, just 'cause?
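
To convince myself on point 2, a tiny sketch of the enclosure-failure logic (assuming the layout goes in exactly as planned, with every vdev keeping 2 of its 6 drives in each of 3 enclosures):

```python
# Does losing one whole enclosure take down any vdev?
# Planned layout: each 6-drive RAIDZ2 vdev spans 3 enclosures, 2 drives each.
drives_per_vdev_per_enclosure = 2
raidz2_parity = 2

# A full-enclosure loss removes exactly 2 drives from every vdev it hosts,
# which equals (not exceeds) RAIDZ2's parity: degraded, but still online.
survives = drives_per_vdev_per_enclosure <= raidz2_parity
print("pool survives a full-enclosure loss:", survives)  # True

# The hot-spare caveat from point 2: if a spare in enclosure A has already
# replaced a failed drive, some vdev may now hold 3 drives in enclosure A,
# and losing that enclosure would then exceed parity and fault the vdev.
```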

Thoughts? Challenges?

thanks in advance!

Dan Reid
 

Attachments

  • Storage configuration - 2 systems.xlsx (21.9 KB)