BillyTheCreator
Cadet · Joined Nov 16, 2017 · Messages: 3
Hello all. I am new to FreeNAS, but not new to datacenters, NAS/SAN, etc. I have been asked to configure a FreeNAS array for enterprise use. (Our O&M team is more than capable of managing a FreeNAS array, so we felt it was better to go with a FreeNAS build over a TrueNAS build.)
The application:
There are four functions this array will have. As a central storage node, the array will: 1) serve VMs to multiple ESXi hosts on the LAN, 2) store backups of each of the ESXi hosts, 3) provide archival of the VMs (see 1), and 4) provide extended non-boot storage for the VMs as required. Each of these functions would correspond to, in non-ZFS speak, a RAID10 volume, with RAID10 chosen to maximize both IOPS and array resiliency.
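For reference, the ZFS equivalent of RAID10 is a pool striped across two-way mirror vdevs. The sketch below shows how the 36-disk shelf would pair up; the device names (da0..da35), pool name, and the final zpool command line are illustrative assumptions, not a tested configuration:

```python
# Sketch: ZFS "RAID10" = a stripe of two-way mirror vdevs.
# Device names da0..da35 are placeholders, not the real enumeration.
disks = [f"da{i}" for i in range(36)]

# Pair the 36 disks into 18 two-way mirrors.
mirrors = [disks[i:i + 2] for i in range(0, len(disks), 2)]

# The zpool create line this layout implies (pool name "tank" is arbitrary):
cmd = "zpool create tank " + " ".join(
    "mirror " + " ".join(pair) for pair in mirrors
)
```

Each mirror vdev contributes the capacity of a single disk, so this layout yields 18 x 12 TB, roughly 216 TB raw usable before ZFS overhead, while surviving one disk failure per mirror.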
First question: are there any red flags with the array's application that make use of FreeNAS a non-starter? Assuming not...
Will the build FreeNAS?
Additional comments/questions:
1) The intent is to use the SATA DOM pair in a mirrored (RAID1) configuration as the FreeNAS boot volume.
2) We would use either the 2x M.2 ports with PLP-capable units or the 2x 2.5" SATA/SAS slots for L2ARC/SLOG devices. Given the size of the disk array and our application, does anyone have suggestions for drives to use? (I am not sure I can justify purchasing Intel P4800X Optane cards.)
3) Is exchanging the LSI 3108 controller for an LSI 3008 (by moving to a 6049P-E1CR36L) recommended or required for FreeNAS to run?
4) As we'd like to maximize the usable life of this unit, we'd like the NAS to run Scalable processors. Is our current selection of 2x 6C/12T Gold 6128s an appropriate option for the build? Would we get more bang for our buck with a different Scalable processor pair? Comments welcome.
5) Is the amount of RAM in the build about right, too much, or too little?
6) Are there any aspects of the build as it stands now that would make it incompatible with FreeNAS?
7) And the ultimate catch-all question ... what am I missing or doing wrong, if anything?
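On questions 2 and 5, two back-of-envelope numbers may help frame replies. A SLOG only needs to absorb a few seconds of in-flight synchronous writes (roughly two transaction-group intervals), so even a saturated 10GbE link implies a small device; and the common forum rule of thumb of about 1 GB RAM per TB of storage gives a sanity check on the 384 GB. Both calculations are sketches under the stated assumptions (5-second txg interval, one saturated link, rule-of-thumb RAM guidance):

```python
# Back-of-envelope sizing, assuming:
#  - the SLOG must hold ~2 transaction groups of sync writes (txg ~5 s)
#  - one saturated 10 GbE link feeding synchronous writes
#  - rule-of-thumb guidance of ~1 GB RAM per TB of raw storage
LINK_GBIT = 10                                # 10 GbE line rate
TXG_SECONDS = 5                               # assumed txg flush interval
slog_gb = (LINK_GBIT / 8) * TXG_SECONDS * 2   # GB/s * seconds * 2 txgs
# -> 12.5 GB: capacity is not the constraint for a SLOG; latency and PLP are

raw_tb = 36 * 12          # 432 TB raw across the shelf
ram_gb = 12 * 32          # 384 GB in the build as specced
gb_per_tb = ram_gb / raw_tb   # ~0.89 GB RAM per TB raw
```

Against usable capacity in a stripe of mirrors (about 216 TB) the ratio is closer to 1.8 GB/TB, so 384 GB looks like a reasonable starting point for a VM-serving pool, though I'd welcome corrections from people running arrays this size.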
Below is the breakdown of the NAS build as it currently stands.
Code:
Chassis:         Supermicro SuperStorage Server 6049P-E1CR36H (4U, Intel C624,
                 36x SATA/SAS via LSI 3108 12G SAS, dual 10-Gigabit Ethernet,
                 1200W redundant PSU with PMBus)
Processor:       2x Intel Xeon Gold 6128, 6C/12T, 3.40GHz, 19.25MB cache, 115W
Memory:          12x 32GB PC4-21300 2666MHz DDR4 ECC Registered DIMM
                 (16x 288-pin DIMM slots total)
Boot drives:     2x 128GB SATA 6.0Gb/s Disk on Module (MLC, vertical)
Storage drives:  36x 12.0TB Seagate Enterprise Capacity v7 (Helium), 3.5",
                 SAS3 12.0Gb/s, 7200RPM, 256MB cache, 512e
Controller:      Supermicro AOC-S3108L-H8iR (LSI SAS3108 ROC), SAS3 12Gb/s,
                 8 internal ports, 2GB DDR3 cache, RAID 0/1/5/6/10/50/60,
                 max 240 devices
Battery backup:  Supermicro CacheVault module for the LSI 3108 controller
Network cards:   4x Intel X710-DA2 10GbE converged network adapter
                 (2x SFP+, PCIe 3.0 x8, copper)
Chassis bezel:   Supermicro MCP-210-84601-0B front bezel (black)

Other specs:
- Onboard Ethernet: Intel X557 10GBase-T controller
- BMC/graphics: ASPEED AST2500
- Drive bays: 36x hot-swap 3.5" SAS3/SATA3, 2x rear hot-swap 2.5" SATA3,
  2x optional onboard NVMe M.2
- Expansion slots: 3x PCIe 3.0 x16, 4x PCIe 3.0 x8
- CPU platform: Xeon Scalable, socket LGA 3647, Hyper-Threading,
  VT-c and VT-d supported