New FreeNAS build - ESXi storage

Status
Not open for further replies.

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
I am planning the following purchase - budget is $2,500-$3,500


Main Chassis and components via Ebay:

https://www.ebay.com/itm/SUPERMICRO...210-8I-JBOD-/172518422871?hash=item282ae63957


Add 12 x 2TB hard drives to start - either SATA or SAS - probably HGST drives purchased from Amazon


SLOG: Intel Optane 900P SSD (NVMe; sold as a U.2 drive or a PCIe add-in card - there is no SATA version)

https://www.amazon.com/Intel-Optane...TF8&qid=1521481720&sr=8-1&keywords=intel+900p




The Main Chassis and components of the ebay listing are:

SUPERMICRO 4U 846BA-R920B


Motherboard - X9DRI-LN4F+

Chassis - 846BA-R920B

CPU's - DUAL INTEL XEON PROCESSOR E5-2680 EIGHT CORE 20M CACHE 2.70GHZ

Memory - 192GB MEMORY (24X 8GB) DDR-3 ECC REG

HBA's - 3X LSI 9210-8I

Backplane - SAS846A

Power supplies - DUAL PWS-920P-SQ

Boot disks: dual 32gb Disk On Modules - mirrored

SLOG - Intel Optane 900P (U.2, NVMe) - mounted internally


So this all comes in a bit over $3,100, which is about the midpoint of my budget. The goal is primarily ESXi datastore storage; the open drive bays allow for later expansion with larger drives. My initial goal is testing FreeNAS in a limited production environment to see how it performs for VM storage.
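For VM block storage, striped mirrors are the usual recommendation over RAIDZ. A back-of-envelope sketch of what 12 x 2TB would yield in that layout (numbers are illustrative assumptions, not from the listing):

```python
# Back-of-envelope usable capacity for 12 x 2TB drives in striped mirrors.
# Assumption: two-way mirrors; "TB" as drives are sold (10^12 bytes).
drives = 12
drive_tb = 2.0

mirror_vdevs = drives // 2           # 6 two-way mirror vdevs
raw_tb = drives * drive_tb           # 24.0 TB raw
usable_tb = mirror_vdevs * drive_tb  # 12.0 TB after mirroring

# Common guidance for iSCSI/block workloads is to keep pool occupancy
# around 50% to preserve performance, so plan around this figure:
recommended_fill_tb = usable_tb * 0.5

print(raw_tb, usable_tb, recommended_fill_tb)  # 24.0 12.0 6.0
```

Half the raw capacity disappears to mirroring before the 50% occupancy guidance is even applied, which is worth budgeting for up front.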


I realize that I could have a better SLOG drive, but only at a significantly higher price point.


The system will be in a controlled environment, noise is not an issue.


I have done a fair amount of research on whether to use an alternate (expander) backplane that would allow a single HBA to reach all 24 drives, versus the three LSI cards, which seems to be the alternative method. If someone can discuss the pros and cons of these two approaches, I am open to learning.


From previous posts on other builds, I have chosen this system for the following reasons: maxed memory, the highest-speed CPUs my budget allows for a test system, and a SLOG device also chosen on budget constraints rather than a higher-end Intel DC P3700.


If anyone (Chris) sees any major glaring issues with this build, please let me know; if you have suggestions for making it better within a similar budget, I am all ears.


I plan on purchasing this week, so thank you for your quick replies and assistance. This forum and its members have been an invaluable help.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I can't comment on anything until I know more about the workload. I know you said VMs, but what kind of VMs, how many, how many users will the VMs support, and how many hosts will connect? I see the board in that server has quad 1Gb NICs - are you OK with a max VM disk speed of about 120MB/s on a max of four VM disks, or less when shared across more? I will say the dual octo-cores could be overkill, but why not?

I suspect you intended to add a few Chelsio 10Gb NICs to that build ...
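To put numbers on the 1GbE point: a rough sketch of per-link and aggregate line-rate throughput (assumptions: 1 Gbit/s per NIC at line rate, overhead ignored):

```python
# Rough line-rate math for quad 1GbE (ignores TCP/iSCSI/NFS overhead,
# which is why ~120MB/s is a more realistic per-link ceiling).
gbit_per_nic = 1.0
nics = 4

mb_per_s_per_nic = gbit_per_nic * 1000 / 8    # 125.0 MB/s theoretical per link
aggregate_mb_per_s = mb_per_s_per_nic * nics  # 500.0 MB/s best case across all four

print(mb_per_s_per_nic, aggregate_mb_per_s)  # 125.0 500.0
```

The aggregate figure also assumes traffic is actually spread across all four links (e.g. iSCSI multipathing); a single VM disk over a single TCP session is still capped near the per-link number.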
 


curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Thank you kdragon75, your question regarding workload is valid... the workload would be approximately 15 VMs across 3 hosts - various Linux/Windows, AD, SQL (fairly small).

I will be adding 10Gb networking, probably in SFP+ format. This system is to test a proof of concept before I spend real money ($12k-$15k) on a true production system. It will let me get comfortable working with FreeNAS and do some basic performance measuring and tweaking so I know how to use FreeNAS better.
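One datapoint for the 900P-as-SLOG question once 10Gb is in: a common rule of thumb is that a SLOG only needs to absorb a couple of transaction groups' worth of incoming sync writes (the default txg interval is 5 seconds), so capacity is rarely the constraint. A hedged back-of-envelope, with the assumptions in the comments:

```python
# SLOG sizing rule of thumb. Assumptions: default 5s txg interval,
# SLOG sized to hold ~2 txgs of sync writes, 10Gb link at full line rate.
link_gbit = 10.0
txg_seconds = 5
txgs_to_hold = 2

max_ingest_gb_per_s = link_gbit / 8  # 1.25 GB/s worst-case ingest
slog_gb_needed = max_ingest_gb_per_s * txg_seconds * txgs_to_hold

print(slog_gb_needed)  # 12.5 (GB) - tiny next to a 280GB 900P
```

In other words, what matters for the SLOG is sync-write latency and endurance, not size; the 900P's spare capacity buys nothing here.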

Thank you for your assistance.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Do you need CIFS, NFS, or anything other than ZFS-based block storage? If not, take a look at Enterprise Storage OS. It's super slim - no web UI, no BS - and only offers block storage via FC, iSCSI, InfiniBand, and a few other more exotic options. The big catch is that to get ZFS you need to compile it yourself (unless you get a support contract, and for production you will want that anyway). It's easy to compile, and the system is rock solid; they are also much more focused, as they don't do the whole "let's add features for everyone and every use case!" thing. The only bummer is that it's not FreeBSD, it's Linux. I have nothing against Linux, I just prefer BSD. I have used it in my homelab for some time and it always works.
 