Will it FreeNAS? Storage Server for ESXi (NFS) and personal data

flx

Cadet
Joined
Jul 13, 2020
Messages
4
Hi everybody,

I'm about to start a new build for my homelab.
Right now I've got an "all in one" solution with FreeNAS on ESXi. This works fine for me, but I'd like to go a little more "advanced" now.

The hardware (or most of it) is already here:
Intel Xeon E5 2620 V3
Asus Z10PA-D8
64GB DDR4 ECC RAM
4x Intel DC S3700 800 PCIe NVMe SSDs
4x 4TB WD Red
2x Samsung 850pro 256GB (for FreeNAS installation)
HP Ethernet 560-SFP+ 10G (should work; otherwise I will look for a used Chelsio)
Fujitsu Case with 470W redundant psu

My plan is to make a RAID10 with the four Intel SSDs and share it over NFS to my ESXi host as VM storage.
The servers are connected over 10G Ethernet.
From my understanding this would be the best solution (for performance) with the four SSDs, but I'm not sure whether a different configuration with a SLOG device would be the better option.
I'd be very happy for some advice/input on the configuration.
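For reference, that RAID10 layout is two mirror vdevs striped together in ZFS terms. A sketch only; the pool and device names below are made up, and on FreeNAS you would normally do all of this through the web UI rather than the command line:

```shell
# Sketch only: pool and device names are placeholders, not from this thread.
# Two mirrored pairs, striped together (ZFS's RAID10 equivalent):
zpool create ssdpool \
  mirror nvd0 nvd1 \
  mirror nvd2 nvd3

# A dataset for the ESXi datastore, exported over NFS:
zfs create -o sharenfs=on ssdpool/vmstore
```

Adding more mirror pairs later widens the stripe, which is one reason mirrored vdevs are popular for VM storage.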

The WD Reds are for personal data; performance is not that important there right now.

Kind regards
Felix
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
I'm not sure a SLOG on another SSD is going to help unless it's on a substantially faster interface. I'm having trouble finding an actual NVMe S3700; they're all SATA. Did you mean P3700? But yes, you can get something faster; the law of diminishing returns applies. "If you want to eat Hippopotamus, you have to pay the freight."

If you divvy up the work into multiple datastores, you might be able to turn off the O_SYNC behavior for the non-critical stuff and get a performance bump. Keep in mind the 10GbE will be your limiting factor... I would perhaps consider iSCSI over NFS.
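That per-datastore split maps to the ZFS `sync` property, set per dataset. A sketch with hypothetical dataset names (FreeNAS also exposes this setting in the web UI):

```shell
# Hypothetical dataset names, for illustration only.
# Critical VMs keep honoring sync-write requests:
zfs set sync=standard tank/vmstore-critical

# Non-critical VMs skip the ZIL entirely: faster, but seconds of
# acknowledged writes can be lost on a crash or power failure.
zfs set sync=disabled tank/vmstore-scratch
```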
 

flx

Cadet
Joined
Jul 13, 2020
Messages
4
Thank you. I think I am fine without a SLOG on the SSD pool and will just go for mirrored vdevs. Yes, I meant the P3700s; it seems I was a little confused.

I will have a look at O_SYNC and read up on it.
You mean throwing the 10G card out and going for a QLogic 16G HBA?
 

flx

Cadet
Joined
Jul 13, 2020
Messages
4
I got a little into it and don't want to turn off sync writes, so I have to go for iSCSI.
If I got it right, I can still use my 10G network card for the iSCSI solution.

Then I will go for striped mirror vdevs (like RAID10) with the four SSDs, and read/write performance should not be limited by the NFS-with-ESXi sync write bottleneck.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
I got a little into it and don't want to turn off sync writes, so I have to go for iSCSI.
If I got it right, I can still use my 10G network card for the iSCSI solution.

Then I will go for striped mirror vdevs (like RAID10) with the four SSDs, and read/write performance should not be limited by the NFS-with-ESXi sync write bottleneck.

You'll still have sync writes with iSCSI, but you'll have ESXi deciding when and where, and using their VAAI acceleration features (which is just a fancy way of VMware saying "we want to license SCSI commands that have been around forever and charge for them"...), vs. NFS which just defines O_SYNC on everything all the time in the RFC spec.

As for the pool configuration... I suspect you need to look at the numbers and do a transfer budget of sorts for each step. Those P3700s are rated for something like 2700 MB/s read and 1080 MB/s write per device. A 10GbE network is going to top out somewhere around 700 MB/s, maybe... So striping P3700s is going to put you out in InfiniBand QDR territory, and you'll still have to consider all the under-the-hood specs that you can't do much about: number of PCIe lanes x GT/s, memory bandwidth, etc...
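That budget can be roughed out with the numbers above. A back-of-the-envelope sketch; real throughput will land lower once protocol and CPU overheads bite:

```shell
# Rated per-device figures quoted above, in MB/s.
p3700_read=2700
p3700_write=1080

# Two mirrored pairs, striped: reads can be served by all four
# devices, writes go to both mirror pairs in parallel.
pool_read=$((4 * p3700_read))    # 10800 MB/s
pool_write=$((2 * p3700_write))  # 2160 MB/s

# 10GbE: 10 Gbit/s / 8 = 1250 MB/s raw; ~700 MB/s is a realistic
# ceiling once protocol overhead is taken into account.
wire_real=700

echo "pool read:  ${pool_read} MB/s"
echo "pool write: ${pool_write} MB/s"
echo "network:    ${wire_real} MB/s (the bottleneck)"
```

Even the pool's write side is roughly three times what the wire can carry, so the network stays the limiting factor at every step.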
 