New ZFS Build - Low IOPS

Status
Not open for further replies.

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
Took one of our ZFS boxes (8-disk RAIDZ3) down, aiming to improve performance by upping the RAM from 16GB to 32GB and rebuilding the pool as RAID 1+0 (striped mirrors) instead.

It has the following spec:
  • Xeon 1220L V2
  • 32GB ECC DDR3 RAM
  • Intel S3700 100GB SLOG
  • 8x Western Digital Velociraptors in stripe across 4x mirrors
  • LSI 9211-8i in IT mode
  • Supermicro SAS836EL1 backplane (supports a max of 1.5Gbps per SATA port)

Running IOMeter I'm getting approximately 100-150 IOPS for 4k random writes inside an ESXi VM over NFS. Shouldn't I be getting a lot more than that, since each disk on its own manages around 300 IOPS on 4k random writes according to StorageReview?
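Back-of-the-envelope, assuming that ~300 IOPS per disk figure holds (and ignoring NFS and sync overhead for the moment):

  4 mirror vdevs x ~300 write IOPS per vdev ≈ 1,200 random write IOPS as a rough ceiling
  (each mirror vdev writes at roughly single-disk speed, since both sides must commit every write)

So 100-150 IOPS is about a tenth of what the disks alone should manage.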
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
All bets are off when you use ESXi virtualization, especially with NFS. This is extreme advanced black-belt voodoo, and performance will be very poor unless you know more about the ins and outs of virtualization than 99% of ESXi users do.

This is why there are about a thousand posts in the forum strongly dis-recommending FreeNAS virtualization with this kind of hypervisor. It's to the point, actually, that some of the forum admins just delete any new posts mentioning virtualization.

You will almost certainly find your performance is outstanding if you do it on bare metal, as anodos suggests.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
do you run freenas within esx or on the hardware host itself?

for testing, just disable sync on the pool; vSphere NFS writes are always forced sync, and this hurts (example commands at the end of this post).
remove your slog device (if you have added your ssd as a "zil") and re-run your benchmarks
have you tested the network speed between your hardware boxes? use iperf!

did you enable autotune and restart twice?
how many cpus are assigned to the nfs daemon?
what is the output from "top -mio" on freenas?
have you upgraded your backplane firmware?
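For anyone following along, a minimal sketch of the sync test and the network check (the dataset name "tank/vmware" and the address placeholder are mine, adjust to your setup):

  # check the current sync setting, then disable it for the test only
  zfs get sync tank/vmware
  zfs set sync=disabled tank/vmware
  # ... run the IOMeter pass, then put it back ...
  zfs set sync=standard tank/vmware

  # raw network throughput between the hosts
  iperf -s                        # on the FreeNAS box
  iperf -c <freenas-ip> -t 30     # on the client side, pointing at the FreeNAS address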
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
do you run freenas within esx or on the hardware host itself?
Sorry, don't think I was clear. Absolutely - FreeNAS is running on bare metal. I'm testing performance from inside an ESXi guest, over NFS to the FreeNAS server.

for testing, just disable sync on the pool; vSphere NFS writes are always forced sync, and this hurts.
remove your slog device (if you have added your ssd as a "zil") and re-run your benchmarks

With sync disabled write IOPS doubled. Is this basically pointing to my SLOG as the issue?

It's an Intel S3700... are you trying to tell me that Intel is being slightly deceptive in quoting 19,000 IOPS for random writes?

have you tested the network speed between your hardware boxes? use iperf!
900 Mbps

did you enable autotune and restart twice?
Yep, it hurt performance in most cases, helped in a few.

how many cpus are assigned to the nfs daemon?
Configured it for 64 threads but where do I specify the number of CPUs assigned to NFS?

what is the output from "top -mio" on freenas?
Seems fairly normal, NFS occasionally hits 100% of I/O.

have you upgraded your backplane firmware?
According to Supermicro I have the latest firmware.

Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
sync=disabled is basically an infinitely fast ZIL.

The S3700 is a very good device for a ZIL.

One thing I'll say is that benchmarking FreeNAS is not cut and dried, and standard benchmarking tools often produce numbers that don't reflect real-world behaviour. I can't explain it much better than that; iX has seen plenty of systems whose real-world performance is fine but whose benchmark values are just terrible.

My advice is to just use the box and see how performance is. I will warn you that 32GB of RAM does not provide a good experience for VMs.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
64 NFS threads is far too many.
you can change the values under services, nfs
http://gamblisfx.com/wp-content/uploads/2014/04/nfs-server-freenas.jpg

the 19,000 IOPS figure for your ssd.... is a wish, since slog writes are queue depth 1 and you cannot change that.
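Rough illustration (the latency number below is an assumption for the sake of the arithmetic, not a measurement from this box):

  at queue depth 1, IOPS ≈ 1 / (latency of one sync write)
  e.g. ~0.5 ms per write (network round trip + NFS + ZIL commit) → ~2,000 IOPS ceiling
  the vendor's 19,000 IOPS figure is measured at a much higher queue depth, with many writes in flight at once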

I do not agree with cyberjock about benchmarks. you need numbers to tell whether your settings make things better or worse, not feelings ; )

have you set up an iSCSI connection? test it against your current NFS setup.

regarding the supermicro jbod: we run our systems with sas disks; in a mixed sata setup we saw extreme delays on disk access, but we never investigated.

btw, what nic are you using?

please also check the ashift on your pool (one way to do this is shown below), and make sure to test NFS speed from another host, like a linux box, too.
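One way to check ashift on FreeNAS of this vintage is zdb pointed at the pool cache file (the path below is the usual FreeNAS default):

  zdb -U /data/zfs/zpool.cache | grep ashift
  # ashift=9  -> 512-byte sectors
  # ashift=12 -> 4K sectors, which is what you normally want on modern drives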
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
64 NFS threads is far too many.
you can change the values under services, nfs
http://gamblisfx.com/wp-content/uploads/2014/04/nfs-server-freenas.jpg
Thanks, re-read the FreeNAS docs and set it to the number of CPUs.
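In case it helps anyone later, the CPU count to match can be read straight from the standard FreeBSD sysctl:

  sysctl hw.ncpu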

have you setup a iscsi connection? test this against your current setup.

btw, what nic are you using?

Nope, haven't tried iSCSI. Does it perform better than NFS for ESXi hosts?

Using an Intel 82574L onboard 1GbE (in LACP configuration).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I do not agree with cyberjock about benchmarks. you need numbers to compare if your settings make things better or worse and no feelings ; )

You realize if I wanted to I could find 10 threads that immediately disprove what you just said there? In fact, you probably couldn't find 10 threads that disagree with me if you could search the entire forums for an unlimited period of time.

But good luck in any case. I'm not the one wasting their time, and it's not my data or my server, so I don't have to worry about this eating into my schedule. :)
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
Good synthetic testing can help a lot, but I think real-world testing can't be beat.
You need both, but as cyberjock said, it takes a lot of time!

That said, some interesting numbers. All values below are for 4k random writes (32 outstanding I/Os):

With an 8x3TB 7.2k RAID Z2 array I'm seeing 552 IOPS.
With the above 8x500GB 10k RAID 10 array I'm seeing 1195 IOPS.
And just for kicks, with 2x 256GB Samsung 840 Pro striped I'm seeing 1855 IOPS.

Moving from the SAS expander backplane with a SATA 1 chipset to direct connections to the SATA 3 controller yielded a performance gain of approximately 10% with the 8x500GB 10k RAID 10 array.

Think that'll do for now.

I have a Synology RackStation spinning away holding our general day-to-day ISOs and common files. Tempted to stick two extra SSDs in it and run some numbers after seeing the figures here:
http://www.storagereview.com/synology_rackstation_rs10613xs_review
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah.. 552 IOPS from a Z2.. not physically possible. More evidence that ZFS is difficult to benchmark. Wee!
 

leonroy

Explorer
Joined
Jun 15, 2012
Messages
77
All arrays have an S3700 100GB SLOG disk... wouldn't that account for it?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That depends.. and you're just illustrating one of the many reasons why ZFS can't truly be "benchmarked". Doing one test, making a setting change, doing a second test, and seeing higher numbers doesn't actually mean that real-world performance has increased.

Anyway, this discussion is boring. I've had plenty of discussions on this topic and I have far better things to do than explain this all over again.
 