I'm building the following:
-Used HP ProLiant DL380e G8 SERVER 25SFF 2x 6 CORE E5-2430 2.2GHz 32GB P420 x1
--32GB RAM, 25 2.5" SFF bays in the front, twin PSUs.
--At first, I put the P420 RAID controller into HBA mode, but then needed RAID elsewhere and replaced it with a spare HP H220 HBA I had lying around.
-Chelsio 110-1121-40 Quad Port 10GB PCIe Card HBA Full Profile x1
--The idea is to use 2 links for the storage traffic, each link going to a separate physical switch in the same switch stack. Stuff like management traffic can remain on the onboard 1GbE NICs.
-Crucial MX500 1TB SSD 3D NAND SATA 6.0Gb/s 2.5" Internal SSD CT1000MX500SSD1 x7
--Two 3-way mirrors 'striped' together (though I understand that FreeNAS/ZFS doesn't quite 'stripe' the way a traditional RAID0 does), plus 1 online spare, with room for 6 more 3-way mirrors for future expansion.
---If I understand this correctly, the net result will be just under 2 TB usable (given overhead, formatting loss, and the plain old fact that a drive sold as 1 TB generally formats to a bit less than that), with triple redundancy: the pool can survive losing a minimum of 2 drives and as many as 4 of the 6 before the entire pool is lost, depending on which drives fail, plus a hot spare in case a drive in the live pool dies. I also believe this pool will have the read IOPS and bandwidth of 2-to-6 drives (since any disk within a mirror can be read independently) and the write IOPS and bandwidth of 1-or-2 drives (I'm not sure on that one, since I'm not sure exactly how ZFS spreads writes across the stripe, but a max of 2 since all disks in a mirror have to write the same data in parallel), as opposed to, say, the read and write IOPS of 1 disk and the bandwidth of 5 disks in a RAIDZ1 configuration. (There's a rough layout sketch after the parts list below.)
-Vaseky M.2 2280 SATA 60GB SSD MLC Internal Solid State Drive (SSD) for Desktop Notebook Standard M.2 SATA 60GB SSD MLC x2
--Relatively cheap, low-capacity mirrored SSDs for boot purposes.
---I'm using a StarTech.com PEXM2SAT32N1 3-port M.2 adapter (1x PCIe/NVMe M.2 slot plus 2x SATA III M.2 slots) to house both M.2 SATA SSDs in a single PCIe slot. These are SATA drives, though, so they're connected to the motherboard's two available conventional SATA ports, which are configured for AHCI mode (as opposed to RAID or legacy).
-MyDigitalSSD 240GB (256GB) BPX 80mm (2280) M.2 PCI Express 3.0 x4 (PCIe Gen3 x4) NVMe MLC SSD x1
--This is an old, leftover part I had. My original intention was to use a WD Black NVMe M.2 2280 250GB PCI-Express 3.0 x4 3D NAND Internal Solid State Drive (SSD) WDS250G2X0C that I bought specifically for this build, but it caused a kernel panic and system reboot every time I tried to add that NVMe device to a pool. A few crashes later, I swapped the WD for the MyDigitalSSD, and now the pool builds successfully and doesn't crash the server.
--This NVMe sits in the M.2 PCIe socket of the above-mentioned StarTech adapter. That's how I got 3 M.2 devices into one PCIe slot, leaving as much room for disk-pool drives as possible (without swapping out some of the server chassis' rear-facing bays/slots).
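For reference, here's a rough sketch of the data-pool layout I have in mind, expressed as shell commands (da0 through da9 are placeholder device names; in practice I'd build this through the FreeNAS UI):
```
# Two 3-way mirror vdevs striped together, plus one hot spare
zpool create tank \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  spare da6

# Future expansion: add another 3-way mirror vdev to the same pool
zpool add tank mirror da7 da8 da9
```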
The overall objective is to build a highly fault-tolerant server (on the cheap; it's just under $2k so far) to power a couple of ESXi hosts, or at least to supplement our existing FreeNAS server (mechanical 3.5" drives) that is already powering them. Right now, the ESXi hosts boot from small 10 GB iSCSI LUNs housed on that FreeNAS server, one LUN dedicated to each host. All the hosts also share a (relatively) big, multi-terabyte iSCSI LUN that houses the VMs and their data. This setup has worked well for me so far, but maybe there's room for improvement this go-around. One suggestion I recently got is to use NFS for the big shared datastore instead of iSCSI; rough sketches of both approaches are below.
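To make the iSCSI-vs-NFS question concrete, here's roughly how I'd expect to provision each approach from the shell (pool/dataset names and sizes are placeholders; on FreeNAS the actual extents, targets, and exports get configured in the UI):
```
# iSCSI approach: a zvol backing the shared VMware datastore
# (size and volblocksize are just example values)
zfs create -V 1T -o volblocksize=16K tank/iscsi-vmstore

# NFS approach: a regular filesystem dataset exported to the ESXi hosts
zfs create tank/nfs-vmstore
zfs set sync=standard tank/nfs-vmstore   # NFS clients (ESXi) request sync writes themselves
```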
I'm on the fence about how I should use the NVMe drive, or whether I should use it at all. My initial thought was to use it as a dedicated SLOG device. I understand that a SLOG only really helps with synchronous (as opposed to asynchronous) writes, and that NFS and VMware rely heavily (if not entirely) on such synchronous writes. What I'm not sure about is whether that applies to VMware over iSCSI. Regardless of whether a SLOG will help here, the current consensus seems to be that a SLOG doesn't strictly need to be mirrored and poses little risk if it fails, since data is only at risk if the SLOG dies at the same time as a crash or power loss (otherwise ZFS just falls back to the in-pool ZIL).
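If I do go the SLOG route, my understanding is it would look roughly like this (nvd0 is a placeholder device name, and tank/iscsi-vmstore is the hypothetical zvol from the sketch above). Setting sync=always on the zvol is, as I understand it, how you'd force iSCSI traffic through the SLOG as well, at the cost of some write latency:
```
# Attach the NVMe as a dedicated SLOG (log vdev); nvd0 is a placeholder
zpool add tank log nvd0

# Force synchronous writes on the iSCSI zvol so it benefits from the SLOG
zfs set sync=always tank/iscsi-vmstore

# If the SLOG ever needs to come out, it can be removed cleanly
zpool remove tank nvd0
```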
Ideas? Suggestions? Corrections? Advice? I'm going to assume the first one is "More RAM (when you can afford it)!"...
Thanks
-Sam M.