BUILD Repurpose Xeon D Server for ESXi Shared Storage - Yay or Nay?

Status
Not open for further replies.

IamSpartacus

Dabbler
Joined
Feb 23, 2017
Messages
38
MODS: Somehow I missed the "Will it FreeNAS" section so feel free to move this thread there.


I'm looking to set up a storage server to present to my home VMware vSphere cluster, using a current server I have that's not really being utilized. I've read up on FreeNAS many times and am about to go on a deep dive, but before I do so I wanted some feedback on how well the following system would perform as strictly a SAN/shared storage server for a vSphere cluster. This server will do NOTHING but present storage to vSphere over iSCSI or NFS (depending on which performs better), and will not run any additional applications (I have VMs/dockers for that elsewhere).

Hardware is as follows:
  • SuperMicro X10SDV-2C-7TP4F (Xeon D-1508 w/onboard LSI2116, Intel X552 SFP+)
  • 16GB (8GB x 2) DDR4 2133 ECC RDIMMs
  • 4 x 400GB Hitachi HUSSL4040ASS600 SAS SSDs
  • 4 x 800GB Intel S3500 SATA SSDs

I'm thinking I can put each set of 4 SSDs into its own RAID10-style layout and have a "top tier" datastore (the Hitachis) for my high-priority VMs and a "second tier" datastore for my less important VMs. This server will be connected to my vSphere cluster through a Cisco SG350XG-24F switch via DACs (though I'll be upgrading to optics shortly).

So before I get real deep into the research, I'm looking for some validation from some FreeNAS pros out there: first of all, that my hardware is supported, and secondly, that I don't have any major bottlenecks here.

Appreciate the feedback.
 
Last edited:

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
For the most part, this sounds like a reasonable build. I'd up the memory a bit if you can, and get another SSD or NVMe drive that has power-loss protection and configure it as a SLOG device so ZIL writes aren't going to your pool disks. This may not be totally necessary with an all-SSD array, but I'd still do it. A zvol shared over iSCSI will offer the most features when used with ESXi and FreeNAS, as it's the only protocol to offer VAAI support (don't forget to set the ZFS "sync" property to sync=always on the dataset/zvol you use). I'm not really sure what you were saying about your different tiers for the SSD drives; you'd need to create two different pools if you want any type of tiered setup. Other than that, I think you'll be OK.
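In command-line terms, that's roughly the following (pool/zvol names, sizes, and the SLOG device name are just examples; in FreeNAS you'd normally do all of this through the GUI):

Code:
# create a zvol to back the ESXi datastore (name/size are placeholders)
zfs create -V 500G -o volblocksize=16K tank/esxi-ds1
# force sync writes so ESXi data always lands on stable storage
zfs set sync=always tank/esxi-ds1
# add the power-loss-protected SSD as a SLOG (device name will differ)
zpool add tank log da8

With the SLOG in place, those sync=always writes hit the log device instead of the pool SSDs.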

Edit: You'll also need to update your LSI 2116 firmware to P20_IT_Firmware here.
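Before flashing, you can check what the card is currently running from the FreeNAS shell (output will vary by system):

Code:
# list controllers with their current firmware/BIOS versions
sas2flash -listall
# the mps driver also logs the firmware version at boot
dmesg | grep -i mps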
 

IamSpartacus

Dabbler
Joined
Feb 23, 2017
Messages
38
bigphil said:
For the most part, this sounds like a reasonable build. I'd up the memory a bit if you can, and get another SSD or NVMe drive that has power-loss protection and configure it as a SLOG device so ZIL writes aren't going to your pool disks. This may not be totally necessary with an all-SSD array, but I'd still do it. A zvol shared over iSCSI will offer the most features when used with ESXi and FreeNAS, as it's the only protocol to offer VAAI support (don't forget to set the ZFS "sync" property to sync=always on the dataset/zvol you use). I'm not really sure what you were saying about your different tiers for the SSD drives; you'd need to create two different pools if you want any type of tiered setup. Other than that, I think you'll be OK.

Edit: You'll also need to update your LSI 2116 firmware to P20_IT_Firmware here.

Thanks for the feedback. So 16GB isn't enough for this small of a zpool? Do you really think I'll see a benefit from putting a SLOG device in front of my SSDs?

As for the two sets of SSDs, I was saying I'd like to use two different datasets for my VM datastores. The Hitachis have a much higher write endurance (38PB), so I'd like to use those for my high-IO VMs. I'm not sure of the best way to go about that, which is why my verbiage is probably off.
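In the meantime I figure I can at least keep an eye on drive wear from the shell (device names are whatever camcontrol reports on my box):

Code:
# list attached disks
camcontrol devlist
# full SMART report; on the Intel S3500s look at the Media_Wearout_Indicator
# attribute, on the SAS Hitachis the percentage-used endurance indicator
smartctl -a /dev/da0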

Thanks for the link to the firmware update; I'll be sure to do that.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You would need to use two different pools in order to have one set of writes go to one set of drives. There would be different datasets on each pool, too.
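Roughly like this (device names are placeholders; the FreeNAS GUI does the equivalent with gptid labels):

Code:
# pool 1: the four Hitachi SAS SSDs as two mirror pairs (RAID10-style)
zpool create tier1 mirror da0 da1 mirror da2 da3
# pool 2: the four Intel S3500s in the same layout
zpool create tier2 mirror da4 da5 mirror da6 da7
# each pool then gets its own zvol/dataset to share as a datastore
zfs create -V 400G tier1/vm-high
zfs create -V 800G tier2/vm-low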
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
The more RAM the better. 16GB is fairly low. If you could at least double it to 32GB that would be a lot better. It just depends on what type of load you expect to run.
 

IamSpartacus

Dabbler
Joined
Feb 23, 2017
Messages
38
Stux said:
You would need to use two different pools in order to have one set of writes go to one set of drives. There would be different datasets on each pool, too.

I see. Does this go against best practices?


bigphil said:
The more RAM the better. 16GB is fairly low. If you could at least double it to 32GB that would be a lot better. It just depends on what type of load you expect to run.

I don't expect a large load but if I feel 16GB isn't cutting it I will double it.
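From what I've read, one way to tell whether 16GB is cutting it is to watch the ARC stats once the pools are under a normal VM load, e.g.:

Code:
# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size
# hit/miss counters; a low hit rate under normal load suggests
# the ARC (and therefore RAM) is too small
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses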
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
IamSpartacus said:
I see. Does this go against best practices?

Well, it's the best practice if what you want is two different pools with different IO loads.

Also, each pool will have half the IOPS/performance it could have if the drives were combined: eight SSDs as four mirror pairs in one pool stripe across four vdevs, while split into two pools each stripes across only two. It really comes down to how much IO you're expecting from the high-IO pool.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
bigphil said:
The more RAM the better. 16GB is fairly low. If you could at least double it to 32GB that would be a lot better. It just depends on what type of load you expect to run.

iSCSI adds a significant extra RAM load, so 16GB should be considered the minimum for iSCSI (versus the regular 8GB minimum), hence the recommendation to consider 32GB.

You probably don't want to be using the bare minimum on this system.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
IamSpartacus said:
I don't expect a large load but if I feel 16GB isn't cutting it I will double it.
The FreeNAS 9.3 documentation says "If you plan to use iSCSI, install at least 16GB of RAM, if performance is not critical, or at least 32GB of RAM if performance is a requirement." (emphasis added).
 