New TrueNAS Core Build

scottrus71

Dabbler
Joined
Aug 16, 2023
Messages
17
Hello all. I'm looking for feedback on my first TrueNAS Core build. My overall plan is to set up a Proxmox cluster on 3 x Intel NUCs and then turn my existing Supermicro bare-bones box into a TrueNAS Core system. From TrueNAS I want to provide file services (NFS, SMB) for the VMs running on the Proxmox cluster. I also want to use iSCSI from TrueNAS to back the VMs on the Proxmox cluster instead of their local storage. I expect the VMs to be responsive and the file storage to be reliable, with fault tolerance against multiple drive losses.

Currently I am running an Ubuntu 18.04 LTS build as follows:
  • Supermicro CSE-846BE16-R920B bare-bones case
    • 4U Server Chassis 2x 920W SQ
    • 24-Bay BPN-SAS2-846EL1 (SAS2 6Gb/s backplane expander)
  • ASRock H370M Pro4
    • Samsung SSD 850 EVO 500GB boot drive (M.2 2280)
    • Intel i7-8700 CPU @ 3.20GHz
    • 16GB RAM, Non-ECC
  • LSI MegaRAID SAS 2108 with BBU
    • 2 x WD Red SA500 NAS SATA SSD (WDS500G1R0A-68A4W0)
    • 3 x WD 10TB Red Plus, CMR, 5400 RPM (WD101EFAX-68LDBN0)
    • 12 x WD 10TB Red NAS, CMR, 5400 RPM (WD100EFAX-68LHPN0)
    • RAID6 119TB Usable EXT4 - File Services, NFS, SMB, Local Storage
    • RAID1 400GB Usable EXT4 - Unused
I also have 3 of the following identical Intel NUCs that are sadly left idle most of the time:
  • Intel NUC 11PAHi50001
    • Intel Core i5-1135G7
    • 64GB DDR4-3200 Non-ECC
    • 500GB SAMSUNG 980 PRO M.2 2280 (Boot, Local Storage)
My goals are to:
  1. Update the aging ASRock H370M with server class hardware in the Supermicro case
  2. Make proper use of the 3 Intel NUCs that are not earning their keep
  3. Separate Storage from Compute for flexibility
  4. Stability and fault tolerance for file storage (NFS, SMB)
  5. Performance and fault tolerance for block storage (iSCSI)

My upgrade (purchase) plans for the existing Supermicro bare-bones that will run TrueNAS are the following 6 components:
  1. Supermicro X11SPL-F mainboard
  2. Xeon Silver 4210 13.75M Cache, 2.20 GHz (10 Cores, 20 Threads)
  3. 4 x 32GB ECC DIMM, PC4-19200 DDR4-2400, dual rank (128GB total)
  4. Chelsio T520-CR for 10Gb/s SFP+ (used)
  5. Supermicro SSD-DM016-SMCMVN1 16GB SATA DOM (boot drive)
  6. LSI 9300-8i HBA (IT mode, Used)
    • File Pool (SMB/NFS): 3 x 5-disk RAID-Z2 (I have the disks, see above)
    • Block Pool (iSCSI for VMs): 2 x 2 SSD (I have the SSDs, see above)
Aside from general hardware review by members of the forum, I have the following concerns.

I have concerns about using the WD Red NAS SSDs in an iSCSI pool for VMs, as they are not rated for that workload. Ideally I would buy enterprise-class SSDs, but there is a high cost associated with doing that. For the initial number of VMs I plan to run, about 8TB would be plenty. With a 6Gb/s SATA3/SAS2 backplane, I'm assuming the SSD mirror will still perform better than spinning disks here. I'm open to recommendations for fast but low-cost storage to back the VMs.
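For the endurance side of that concern, here's the rough back-of-envelope math I'm working from, in Python (the 350 TBW and 5-year warranty figures are what I recall for the 500GB SA500, so treat them as assumptions to verify against WD's spec sheet):

```python
# Back-of-envelope endurance check for using consumer NAS SSDs as iSCSI VM storage.
# ASSUMPTION: 350 TBW / 5-year warranty recalled for the 500 GB WD Red SA500;
# verify against the actual datasheet before relying on these numbers.

TBW = 350            # assumed rated endurance, terabytes written
CAPACITY_TB = 0.5    # drive capacity in TB
WARRANTY_YEARS = 5   # assumed warranty period

# Drive Writes Per Day (DWPD) implied by the rating over the warranty period
dwpd = TBW / (CAPACITY_TB * WARRANTY_YEARS * 365)
print(f"Implied endurance: ~{dwpd:.2f} DWPD")            # ~0.38 DWPD

# Daily write budget before the rating is exhausted within the warranty window;
# ZFS write amplification and sync writes from VMs eat into this further.
daily_budget_gb = TBW * 1000 / (WARRANTY_YEARS * 365)
print(f"Write budget: ~{daily_budget_gb:.0f} GB/day")    # ~192 GB/day
```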

I also want to ensure the Xeon Silver is the right choice here for TrueNAS Core. I initially considered the Xeon Bronze 3200 series, but it comes with slower memory support (2133MHz vs 2400MHz for the Silver CPUs) and lacks cores and hyper-threading. I definitely do not want to go above 85W TDP for the CPU, as there is already enough heat in the case from the hard drives. I have no plans to move to TrueNAS SCALE, and I want to be content with this hardware for the next several years.

Lastly, the idea of doing iSCSI from the Proxmox cluster of NUCs is a bit concerning. I'm going to ignore the fact that I don't have redundant links for multipath iSCSI, since I don't have redundant switches anyway; that's a home-use compromise. The real concern is resource contention from running both user data and iSCSI over the same network port. I could pick up some USB-C 1GbE adapters for the NUCs if needed.
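To put rough numbers on that contention, a quick back-of-envelope in Python (theoretical line rate only; NFS/SMB/iSCSI protocol overhead would push the real figures lower):

```python
# Rough ceiling for a single shared 1 GbE link between a NUC and the NAS.
# Theoretical line rate only; protocol overhead reduces real-world throughput.

link_gbps = 1.0
link_mb_per_s = link_gbps * 1000 / 8          # ~125 MB/s theoretical
print(f"Shared link ceiling: ~{link_mb_per_s:.0f} MB/s")

# If iSCSI (VM disks) and SMB/NFS (user data) are busy at the same time,
# they split that ceiling between them.
print(f"Rough per-class share under contention: ~{link_mb_per_s / 2:.0f} MB/s")
```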

Thanks for taking the time to read and reply. This forum has been a fantastic way to research builds and configurations as I try my first TrueNAS and ZFS build.
 

scottrus71

Dabbler
Joined
Aug 16, 2023
Messages
17
As things go, plans changed a bit.

  1. Changed out the LSI 9300 for an LSI 9207
  2. Picked up an AOC-SHG3-4M2P with 2 x 2TB IronWolf 525 M.2 NAS NVMe drives
  3. Added one more WD Red Pro 10TB drive for a total of 16 drives.

Assuming the NVMe drives and the AOC-SHG3-4M2P work together in the X11SPL-F (the board is not on the AOC's compatibility list), they will become a mirrored vdev for my VMs, served up over iSCSI to Proxmox. I'll move the WD Red SA500s into service elsewhere in my home lab instead of using them in the TrueNAS system.

I now have 16 x 10TB drives instead of 15. Previously I was going to do two 6 x 10TB RAID-Z2 vdevs with 3 drives left over for future use. However, I'll now do two 8 x 10TB RAID-Z2 vdevs. This will cleanly let me fill all 24 bays in the future by adding another 8 x 10TB vdev, for a total of 3 vdevs backing the file storage pool (primarily media streaming).
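As a sanity check on that layout, here's the naive capacity math in Python (it ignores ZFS metadata and padding overhead, the TB vs TiB distinction, and the usual advice to keep a pool under roughly 80% full, so treat the numbers as upper bounds):

```python
# Naive capacity estimate for 8-wide RAID-Z2 vdevs of 10 TB drives.
# Real usable space is lower: ZFS metadata, padding, TB-vs-TiB, and the
# usual "stay under ~80% full" guidance all take a cut.

DRIVE_TB = 10
WIDTH = 8
PARITY = 2           # RAID-Z2 = two parity disks per vdev

def usable_tb(vdevs: int) -> int:
    """Naive (width - parity) * size estimate across all vdevs."""
    return vdevs * (WIDTH - PARITY) * DRIVE_TB

for vdevs in (2, 3):     # 2 vdevs now, 3 once the chassis is full
    raw = vdevs * WIDTH * DRIVE_TB
    est = usable_tb(vdevs)
    print(f"{vdevs} vdevs: {raw} TB raw, ~{est} TB usable (naive), "
          f"~{est * 0.8:.0f} TB at 80% fill")
```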

Other than that, all parts are here and I'm ready to build. After I set up the system and do some initial testing, I'll post an update.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
LSI 9300-8i HBA (IT mode, Used)
Perhaps it is only a wording issue. But "IT mode" is not sufficient, you need IT firmware. That is a subtle but crucial difference.
 

scottrus71

Dabbler
Joined
Aug 16, 2023
Messages
17
Is there a specific firmware and flash utility I should use? Links to any past threads would be awesome. This would be for the LSI 9207-8i.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Perhaps it is only a wording issue. But "IT mode" is not sufficient, you need IT firmware. That is a subtle but crucial difference.
I don't think there's a distinction, rather some people mistakenly lump other things along with IT mode, from IR (which is fine, but moderately slower) to mrsas direct-attach disks (which are not really well-tested).
Is there a specific firmware
P20.00.07 IT mode (give or take a zero or two).
flash utility I should use
sas2flash - it's included with TrueNAS for convenience, but don't run it on a controller that's backing an online pool.
 

scottrus71

Dabbler
Joined
Aug 16, 2023
Messages
17
Well, after a few days of building, the box is up and running. My biggest issue right now is NFS performance. I'm still troubleshooting and will test with iperf next to verify I'm getting GbE speeds between the client and the NAS.
 