ESXi, ZFS performance with iSCSI and NFS


aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Well, I think if you wanted a dedicated SLOG device for your ZIL, then a DC S3700 would have been a better choice. Although the OWC PCIe SSD is very intriguing.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Well, I think if you wanted a dedicated SLOG device for your ZIL, then a DC S3700 would have been a better choice. Although the OWC PCIe SSD is very intriguing.

Agreed. But I was buying a boot drive at the time. I've used the S3700s in other projects and they are very nice SSDs, especially for write-heavy workloads; PCIe is probably the only way to move up from them in a big way. For a SLOG-only device I'd love to benchmark a STEC 840Z, a 16 GB SAS SSD purpose-built as a ZIL device. On paper it looks to be a step down from a ZeusRAM, though of course it's cheaper, so you'd expect that. The problem with all the paper specs is that none of them use block sizes close to what ZFS uses when it writes out data to the ZIL.
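
For what it's worth, here's a rough sketch (in Python) of the kind of small-block sync-write test I have in mind. The device path, block sizes, and write count are placeholders, and it's nowhere near a real ZIL workload, but it's a lot closer to one than the big sequential transfers the spec sheets quote. Don't point it at a device with data you care about.

[CODE]
#!/usr/bin/env python3
# Rough sync-write micro-benchmark: times small synchronous writes at
# several block sizes, which is closer to ZIL behavior than spec-sheet
# sequential numbers. Device path and sizes below are placeholders.
import os
import time

DEV = "/dev/da1"            # CHANGE ME: the SSD under test (contents will be overwritten!)
BLOCK_SIZES = [4096, 8192, 16384, 65536, 131072]
WRITES_PER_SIZE = 2000

# O_DSYNC (falling back to O_SYNC) makes every write wait for the device,
# roughly like a ZIL commit does.
SYNC_FLAG = getattr(os, "O_DSYNC", os.O_SYNC)

for bs in BLOCK_SIZES:
    buf = os.urandom(bs)
    fd = os.open(DEV, os.O_WRONLY | SYNC_FLAG)
    start = time.time()
    for _ in range(WRITES_PER_SIZE):
        os.write(fd, buf)
    elapsed = time.time() - start
    os.close(fd)
    mb_s = bs * WRITES_PER_SIZE / elapsed / (1024 * 1024)
    print(f"{bs:>7} B blocks: {WRITES_PER_SIZE / elapsed:8.0f} writes/s, {mb_s:7.1f} MB/s")
[/CODE]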

One more thing to chew on:

Mirror config (12 vdevs of 2 disks each):
STEC only (in JBOD unit): 287 MB/s
Intel only: 134 MB/s
sync=disabled (for fun only): 642 MB/s
local (async): 399 MB/s

This is the exact same hardware and config I benchmarked above; the only change is that I'm using an Ubuntu-based setup (it pains me greatly to write that, but at the end of the day I need stability and throughput for my VMs). If I go down this road it will be mid-summer at the earliest before I would move this from testing into production. My gut reaction is that they've tuned ZFS differently and/or are writing to my STEC in a more optimized fashion.
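
To check the "tuned differently" theory, the first thing I'll do is diff the dataset properties between the two boxes. A quick sketch of that (the dataset name is just a placeholder):

[CODE]
#!/usr/bin/env python3
# Dump the ZFS properties most likely to explain a sync-write difference,
# so the FreeNAS box and the Ubuntu/ZoL box can be diffed side by side.
# "tank/vmstore" is a placeholder dataset name.
import subprocess

DATASET = "tank/vmstore"
PROPS = "sync,logbias,recordsize,compression,atime"

out = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value,source", PROPS, DATASET],
    capture_output=True, text=True, check=True,
).stdout
print(out, end="")

# The sync=disabled number above was "for fun only" -- I wouldn't run VM
# storage that way, but this is how it was toggled:
# subprocess.run(["zfs", "set", "sync=disabled", DATASET], check=True)
[/CODE]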

As a final note, I've set up and benchmarked just about every ZFS appliance in the past week as a virtual SAN, and all the Solaris-based ones performed in line with FN. In fact, total package/experience-wise FN kicked the tar out of the Solaris-based ones. FN was the only appliance that didn't include drivers for vmxnet3, which I found interesting.

Edit:
Added stats for using the Intel only above. Going to try a firmware upgrade on the STEC and then retest under FN.
 

rdrcrmatt

Dabbler
Joined
Jul 14, 2014
Messages
10
8 vCPUs on a VM? I hope it's the only VM on that ESXi host, or that the host doesn't have more vCPUs allocated than it has physical cores.

Or I hope I read that wrong.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
8 vCPUs on a VM? I hope it's the only VM on that ESXi host, or that the host doesn't have more vCPUs allocated than it has physical cores.

Or I hope I read that wrong.

No, you read it correctly (I do have 32 logical cores to spread around, so 8 isn't going to kill me). While I'm slightly over-committed on CPUs across all my VMs, it's very rare that they spike at the same time; I average less than 10% CPU utilization on the physical box during normal operations. But when one of my VMs needs some CPU headroom it's available; for the SAN it's typically when I'm doing a manual replication of one of my big datasets.
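
Just to put numbers on the "slightly over-committed" bit, this is the back-of-the-envelope math I do (the VM list below is made up for illustration):

[CODE]
#!/usr/bin/env python3
# Back-of-the-envelope vCPU over-commit check. The inventory below is
# made up for illustration; plug in your own VM list.
vcpus_per_vm = {
    "freenas-san": 8,
    "db01": 4,
    "db02": 4,
    "app01": 4,
    "app02": 4,
    "app03": 2,
    "dc01": 2,
    "build01": 8,
}
physical_threads = 32   # logical cores on the ESXi host

allocated = sum(vcpus_per_vm.values())
ratio = allocated / physical_threads
print(f"{allocated} vCPUs allocated across {len(vcpus_per_vm)} VMs "
      f"on {physical_threads} threads -> {ratio:.2f}:1 over-commit")
[/CODE]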
 

rdrcrmatt

Dabbler
Joined
Jul 14, 2014
Messages
10
That's not exactly how VMware works.

Over-allocating CPU is one of the biggest performance killers. For an 8-vCPU VM to get physical processor time, the hypervisor needs to have 8 cores available (idle) to schedule its instructions. All 8. If you really want to keep your vCPU counts that high, keep an eye on your CPU Ready (%RDY) and Co-Stop (%CSTP) values and evaluate them against VMware best practices for those values (rough conversion sketch below).

Or, in short, to "set it and forget it," keep your VMs to 1-2 vCPUs unless you are CERTAIN you need the extra cores. I won't add a vCPU to a VM in my environments unless the VM is CPU-constrained based on historical performance numbers. It's rare that I'll let a 4-vCPU VM into the wild.
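
If it helps, this is roughly how I turn the CPU Ready "summation" value from the vSphere performance charts into a percentage. The sample numbers and the 5%/10% cut-offs below are just commonly quoted rules of thumb, not official limits:

[CODE]
#!/usr/bin/env python3
# Convert the CPU Ready "summation" counter (milliseconds per sample) from
# the vSphere performance charts into a per-vCPU percentage.
# Formula: ready_ms / (interval_s * 1000) * 100, then divided by vCPU count.
# Sample values below are made up; thresholds are rules of thumb only.

def ready_percent(ready_ms: float, interval_s: int, vcpus: int) -> float:
    """Per-vCPU CPU Ready percentage for one sample interval."""
    return ready_ms / (interval_s * 1000) / vcpus * 100

# Real-time charts sample every 20 s; an 8-vCPU VM showing 12000 ms ready:
pct = ready_percent(ready_ms=12000, interval_s=20, vcpus=8)
print(f"CPU Ready: {pct:.1f}% per vCPU")

if pct > 10:
    print("-> heavily contended: cut vCPUs or move the VM")
elif pct > 5:
    print("-> worth watching")
else:
    print("-> fine")
[/CODE]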
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I won't add a vCPU to a VM in my environments unless the VM is CPU-constrained based on historical performance numbers.

Well, that's kind of a silly thing. Sometimes you *know* a VM is going to chew all the CPU you can throw at it, in which case we experienced admins simply make an intelligent guess; I did that just this morning and built a VM with 8 cores which is now running the system between 91-97% busy. Fun to watch it start burning about 100 extra watts. It's great to be careful and cautious about how you allocate CPU resources, but there's such a thing as needlessly paranoid.
 

rdrcrmatt

Dabbler
Joined
Jul 14, 2014
Messages
10
I have that policy because everyone "needs" 8 vCPUs. Systems that I know will actually use the horsepower get it.

Perhaps I was a little conservative there, but adding vCPUs when they aren't needed can be a huge performance impairment. I can relate several instances I've experienced myself where lowering the number of vCPUs allocated to a VM made it perform better.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, but, again, a one-rule-fits-all sort of strategy is not going to lead you to a best-case outcome. Certainly adding vCPU without rationality is a path to madness, but so is NOT adding vCPU without rationality.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
He (rdrcrmatt) did say he had no problem with giving more vCPUs if he's confident it will need it, and to keep an eye on %RDY/%CSTP values. I don't think any sane person is legitimately advocating "One Template To Rule Them All, And In The DC Bind Them."
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm specifically objecting to a trite algorithm as previously quoted.
 