New FreeNAS build. ESXi Datastore. Opinions?

Status
Not open for further replies.

Jaknell1011

Dabbler
Joined
Aug 28, 2013
Messages
49
I have a Dell PowerEdge C6100 XS23-TY3 system. It is a 2U, 4-node system. I am using 3 of the 4 nodes as ESXi hosts and will be using node 4 as my FreeNAS box. This FreeNAS box will serve as the datastore for the 3 ESXi hosts. Each node's specs are the same, with the exception of the storage on the FreeNAS node. The specs for each node are as follows:

(2) Intel Xeon L5520 2.27ghz Quad Core CPUs
32GB of RAM
(1) 16GB Sandisk USB stick for hypervisor and FreeNAS installation
(12) 1TB HDDs in RAIDz2 configuration (2x6HDD) (FreeNAS box only)
(1) IBM Serveraid M1015 SAS/SATA Controller (FreeNAS box only)
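A pool with that 2x6 RAIDZ2 layout could be created along these lines. This is only a sketch: the pool name "tank" and the da0..da11 device names are placeholders, not taken from the thread.

```shell
# Sketch: create a pool of two 6-disk RAIDZ2 vdevs.
# "tank" and da0..da11 are assumed names; substitute whatever
# `camcontrol devlist` shows for the disks behind the M1015.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11

# Verify the vdev layout.
zpool status tank
```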

I am looking for opinions on how this system should perform as an NFS datastore for 3 ESXi hosts. I plan on using NFS over iSCSI because I am considering a backup solution and expect it will be easier to back up individual files over NFS than to back up one large file extent. Each host will only be running a couple of server VMs (3 max to start).
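For reference, once the FreeNAS dataset is exported over NFS, each ESXi host would mount it as a datastore roughly like this; the IP address, share path, and datastore name below are hypothetical.

```shell
# Sketch: mount the FreeNAS NFS export on an ESXi host.
# 192.168.1.50, /mnt/tank/vmstore, and "freenas-ds" are placeholder values.
esxcli storage nfs add \
  --host 192.168.1.50 \
  --share /mnt/tank/vmstore \
  --volume-name freenas-ds

# Confirm the datastore mounted.
esxcli storage nfs list
```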

Any configuration suggestions would be great. I am currently considering adding a SLOG device and am looking at a 32GB Intel X25-E (SSDSA2SH032G1GN). Will adding this help boost performance of the ESXi setup?

Basically...
1.) How well would you expect this to perform as an ESXi datastore?
2.) Should I add anything (SLOG, ZIL, etc.), and will I see a performance gain?
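If the SLOG route is taken, attaching the SSD is a one-liner. The pool and device names below are assumptions, not from the thread.

```shell
# Sketch: attach the SSD as a dedicated SLOG (separate log) device.
# "tank" and ada0 are placeholder names; check `zpool status` and dmesg
# for the real ones.
zpool add tank log ada0

# ESXi's NFS client issues sync writes, so confirm sync is not disabled;
# with sync=disabled the SLOG is bypassed (and data safety is lost anyway).
zfs get sync tank
```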
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, performance will depend on MANY factors. I will tell you that 32GB of RAM can quickly become "not enough" if you have many VMs (how many is many? It depends on your VMs' actual workload). Your L2ARC will also be limited to a fairly small size because you have only 32GB of RAM.
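One way to see how far that 32GB actually goes under load is to watch the ARC counters; on FreeBSD/FreeNAS they are exposed via sysctl. A quick diagnostic sketch:

```shell
# Current ARC size in bytes, plus hit/miss counters. A low hit rate under
# real VM load is the sign that RAM (and then L2ARC) is the bottleneck.
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```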

Also, you may want to consider running multiple mirrored vdevs instead of RAIDZ2 vdevs.
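The same 12 disks laid out as striped mirrors would look roughly like this (pool and device names are again placeholders). Roughly speaking, you trade usable capacity (about 6TB vs 8TB with 1TB drives) for about 3x the random IOPS, since each vdev contributes around one member disk's worth of random IOPS.

```shell
# Sketch: six 2-way mirrors instead of two 6-disk RAIDZ2 vdevs.
# "tank" and da0..da11 are assumed names.
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4  da5 \
  mirror da6 da7   mirror da8 da9   mirror da10 da11
```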

The best I can say would be "put it together and test it out".
 

Jaknell1011

Dabbler
Joined
Aug 28, 2013
Messages
49
Do you think a SLOG device will help performance at all?

I am using this right now with a couple of test VMs and everything seems to run smoothly. I am looking to switch over to production with this and want to make sure I have all of my bases covered before I do.

Out of the gate I will only be running 2 server VMs. One is our application server (2 SQL DBs, used infrequently, maybe 3 users making at most 1,000 IOs per day) and the other is our "everything else" server that does print and file serving and is our primary DC, DNS, and DHCP server. (It was here before I got here; I plan on balancing the workload by creating 5 or 6 VMs that each handle 1 role.)

These servers really don't get hit very hard. We have about 100 users that use the network drives and printing throughout the day, but other than that and the usual DNS, DHCP, and AD traffic, things are pretty slow. (It's a K-8 school.)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Will it "help"? Yes. Will it help enough that you aren't going to have major latency issues? Probably not. But you really won't know how good or how bad it will perform until you build it.
 

Jaknell1011

Dabbler
Joined
Aug 28, 2013
Messages
49
Will it "help"? Yes. Will it help enough that you aren't going to have major latency issues? Probably not. But you really won't know how good or how bad it will perform until you build it.


Does this mean you think I WILL have major latency issues, or are you just making a point about the uncertainty of the situation? I truly intend to clarify, not to sound sarcastic or come off as a jerk.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
By far, most people have latency issues if they try to run more than 1 or 2 VMs. If you have 10 idle VMs you might have no problems at all. You could also have 2 busy VMs that have latency problems leading to lost data. It really depends on your pool's performance and how much load you throw at it. I run a single VM via iSCSI on my server and it performs like crap. But I only use it for a few services, so I don't care if it takes 5 minutes to boot up and 30 seconds for an application to load. I'd never even consider putting a second VM on my pool.

I have no clue what kind of load you will have, but I'd say 90% or more of users that try to run VMs over ESXi have latency issues with their NFS shares. Nobody goes with ESXi for just 1 VM. You always want more and more. It's like a bad drug and you can't stop. The problem is that most people hit their server's limit before they get anywhere close to the number of VMs they wanted to run, never mind future expansion.

We've also had people that got it up and running and it worked fine for weeks or months. But suddenly they started having problems, and they didn't know what to do while it was crashing their VMs.

Personally, I'd never try to build an ESXi NFS datastore for more than 2 or 3 VMs without a FreeNAS box with at least 64GB of RAM, at least a 3-vdev mirrored zpool, a modest L2ARC, and a SLOG. I don't like VMs crashing and losing your data. That's just not cool. Performance can always be increased by upping the RAM, adding more vdevs, and using a larger L2ARC and larger SLOG (but only to a small extent; SLOGs don't need much space).
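To put rough numbers on the vdev advice above: random IOPS scales with vdev count, since each vdev delivers roughly one member disk's worth of random IOPS (~100 for 7200rpm drives). Both figures are rules of thumb, not from the thread.

```shell
# Back-of-envelope random-IOPS comparison, assuming ~100 IOPS per vdev.
PER_VDEV=100
echo "2x RAIDZ2 vdevs: $((2 * PER_VDEV)) IOPS"
echo "6x mirror vdevs: $((6 * PER_VDEV)) IOPS"
```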
 