scottrus71
Dabbler
- Joined
- Aug 16, 2023
- Messages
- 17
Hello all. I'm looking for feedback on my first TrueNAS Core build. My overall plan is to set up a Proxmox Cluster on 3 x Intel NUCs and then turn my existing Supermicro bare-bones box into a TrueNAS Core box. From TrueNAS I want to provide file services (NFS, SMB) for the VMs running on the Proxmox Cluster. I also want to use iSCSI from TrueNAS to back those VMs instead of using local storage. I expect the VMs to be responsive and the file storage to be reliable, with fault tolerance against multiple drive losses.
Currently I am running an Ubuntu 18.04 LTS build as follows:
- Supermicro CSE-846BE16-R920B bare-bones chassis
- 4U server chassis, 2 x 920W SQ PSUs
- 24-bay BPN-SAS2-846EL1 backplane (SAS2 6Gb/s expander)
- ASRock H370M Pro4
- Samsung SSD 850 EVO 500GB boot drive (M.2 2280)
- Intel i7-8700 CPU @ 3.20GHz
- 16GB RAM, Non-ECC
- LSI MegaRAID SAS 2108 with BBU
- 2 x WD Red SA500 NAS SATA SSD (WDS500G1R0A-68A4W0)
- 3 x WD 10TB Red Plus, CMR, 5400 RPM (WD101EFAX-68LDBN0)
- 12 x WD 10TB Red NAS, CMR, 5400 RPM (WD100EFAX-68LHPN0)
- RAID6, 119TB usable, EXT4 (file services: NFS, SMB, local storage)
- RAID1, 400GB usable, EXT4 (unused)
The 3 Intel NUCs that will run the Proxmox Cluster:
- Intel NUC 11PAHi50001
- Intel Core i5-1135G7
- 64GB DDR4-3200 Non-ECC
- 500GB SAMSUNG 980 PRO M.2 2280 (Boot, Local Storage)
My goals for this build:
- Update the aging ASRock H370M with server class hardware in the Supermicro case
- Make proper use of the 3 Intel NUCs that are not earning their keep
- Separate Storage from Compute for flexibility
- Stability and fault tolerance for file storage (NFS, SMB)
- Performance and fault tolerance for block storage (iSCSI)
My upgrade (purchase) plans for the existing Supermicro bare-bones that will run TrueNAS are the following 6 components:
- Supermicro X11SPL-F mainboard
- Xeon Silver 4210 (10 cores / 20 threads, 13.75M cache, 2.20 GHz)
- 4 x 32GB ECC DDR4-2400 (PC4-19200), dual rank (128GB total)
- Chelsio T520-CR dual-port 10GbE SFP+ NIC (used)
- Supermicro SSD-DM016-SMCMVN1 16GB SATA DOM (boot drive)
- LSI 9300-8i HBA (IT mode, used)
- File pool (SMB/NFS): 3 x 5-disk RAID-Z2 vdevs (I have the disks; see above)
- Block pool (iSCSI for VMs): 2 x 2-disk SSD mirrors (I have the SSDs; see above)
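For what it's worth, here is a rough sketch of what those two layouts would look like from the TrueNAS shell. The pool names, device names (da0..da14, ada0..ada3), and zvol size are placeholders; in practice the pools would be built through the TrueNAS UI so the middleware knows about them.

```shell
# File pool: three 5-disk RAID-Z2 vdevs (placeholder FreeBSD device names)
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 \
  raidz2 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14

# Block pool: striped SSD mirrors for the VM iSCSI workload
zpool create fastpool \
  mirror ada0 ada1 \
  mirror ada2 ada3

# Example zvol to export over iSCSI (16K volblocksize is a common
# starting point for VM disks; tune to the guests' I/O pattern)
zfs create -V 2T -o volblocksize=16K fastpool/vm-store
```

Striped mirrors rather than RAID-Z on the block pool keeps IOPS up, which is usually what matters for VM storage.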
I have concerns about using the WD Red NAS SSDs in an iSCSI pool for VMs, as they are not rated for that workload. Ideally I would buy enterprise-class SSDs, but the cost is high. For the initial number of VMs I plan to run, about 8TB would be plenty. With a 6Gb/s SATA3/SAS2 backplane, I'm assuming mirrored SSDs will still outperform spinning disks here. I'm open to recommendations for fast, low-cost storage to back VMs.
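One way to take some of the guesswork out of the endurance question: run the consumer SSDs for a while and watch how fast the wear indicators move. TrueNAS Core ships smartmontools, so something along these lines works (device name is a placeholder, and the exact attribute names vary by vendor):

```shell
# Print SMART attributes for the SSD and pull out wear / total-writes
# fields; checking these monthly shows how quickly the VM workload is
# consuming the drive's rated TBW.
smartctl -A /dev/ada0 | egrep -i 'wear|percent|written'
```

If the wear rate projects to years of life at your write volume, the WD Reds may be fine; if not, that is concrete justification for enterprise drives.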
I also want to ensure the Xeon Silver is the right choice here for TrueNAS Core. I initially considered the Xeon Bronze 3200 series, but it has both slower memory support (2133 MHz vs 2400 MHz for Silver CPUs) and lacks cores and Hyper-Threading. I definitely do not want to go above 85 W TDP for the CPU, as there is already enough heat in the case from the hard drives. I have no plans to move to TrueNAS SCALE, and I want to be content with this hardware for the next several years.
Lastly, the idea of doing iSCSI from the Proxmox Cluster of NUCs is a bit concerning. I'm going to ignore that I don't have the redundant links needed for multipath iSCSI, since I don't have redundant switches anyway; a home-use compromise. The real concern is resource contention from running both user data and iSCSI traffic over the same network port. I could pick up some USB-C 1GbE adapters for the NUCs if needed.
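For reference, my understanding is that attaching the TrueNAS target on the Proxmox side would be a single entry in /etc/pve/storage.cfg, roughly like the sketch below (portal IP and target IQN are placeholders; the iqn.2005-10.org.freenas.ctl base name is the TrueNAS Core default):

```
iscsi: truenas-vms
        portal 192.168.1.50
        target iqn.2005-10.org.freenas.ctl:proxmox-vms
        content images
```

With "content images" the cluster uses the exported LUNs directly as VM disks; layering LVM on top of the iSCSI storage is the usual alternative when you want to carve many VM disks out of one big LUN.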
Thanks for taking the time to read and reply. This forum has been a fantastic way to research builds and configurations as I try my first TrueNAS and ZFS build.