
ESXi, ZFS performance with iSCSI and NFS

Status
Not open for further replies.

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
Hello!

A few months ago I bought the following all-in-one setup for personal use:

Hardware:
Motherboard: Supermicro X10SML-F
CPU: Intel Xeon E3-1230 v3
RAM: 32GB ECC
Raid Card: IBM M1015 (IT mode)
Hard drives: 4x 2TB WD Red (ZFS, RAIDZ), 2x 1TB WD Red, 1x 1TB Samsung

Software:
ESXi 5.5 + vCenter
FreeNAS 9.1.1
pfSense 2.1

Network configuration:
LAN - Internal LAN for both virtual and physical clients.
LAB - Isolated LAN through pfSense, used for lab machines.
iSCSI LAN - Used only for iSCSI traffic; MTU is set to 9000.

The idea was originally to run my ESXi datastores over NFS, and to be honest I kind of ignored the performance problems that come with this if you are not running a fast disk like an SSD for the ZIL. I was hoping I would fix it somehow anyway.

Once I realized that I could not escape the problem without buying an SSD for a server that has already gone over budget several times, I decided to run my datastores via iSCSI instead, which has been working pretty well, except that I get quite poor write performance compared to the reads! Since I had some time and motivation lately, I've run some performance tests that I would need help analyzing.

Please look at the following pictures with different configurations:

FreeNAS 8GB, iSCSI and ZFS sync on
[benchmark screenshot]


FreeNAS 16GB, iSCSI and ZFS sync on
[benchmark screenshot]


FreeNAS 8GB, NFS and ZFS sync off
[benchmark screenshot]


FreeNAS 16GB, NFS and ZFS sync off
[benchmark screenshot]


Conclusion
  • ZFS likes RAM. Significantly better performance with 16GB instead of 8GB.
  • iSCSI is slower in both writes and I/O.
Question
  • Why is iSCSI so slow at writing compared to NFS?
  • Which SSD would be good to be able to achieve the performance I get with FreeNAS 8GB and NFS?
I'm very grateful for all the tips and help I can get!
Have a nice day!
// Anthon
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
Did you manually set your zvol to sync=always for iSCSI? The default does not sync with iSCSI.

Is compression off?

Data written or read for testing should be double the amount of ram in the FreeNAS vm.
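For reference, checking and changing the sync policy on the zvol can be done from the FreeNAS shell. This is a minimal sketch; "tank/vmstore" is a placeholder for your actual pool/zvol path.

```shell
# Show the current sync policy on the zvol backing the iSCSI extent
# (SOURCE column shows whether it is the default or set locally)
zfs get sync tank/vmstore

# Force sync writes for all transactions, including iSCSI block writes
zfs set sync=always tank/vmstore
```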
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
Hello ser_rhaegar and thank you for your answer!

What do you mean by always? At the moment I'm running "zfs set sync=disabled" on the whole RAIDZ.

Compression is off, but how can I change the settings of my zvol?

So you mean that I should test with, for example, a 32GB file when running FreeNAS with 16GB?
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
You wrote in your test titles:
FreeNAS 16GB, iSCSI and ZFS sync on

But sync is most likely not running on that test unless you specifically set your zvol to sync=always.

I don't know that you can change compression for the zvol after creating it (it doesn't list that info in the web GUI or offer to change it there). When you create the zvol you can set compression on or off. Default is inherit.

Yes, if FN has 16GB of RAM, you want to run your tests with 32GB of data. Otherwise you're testing your ARC and not your disks.
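The compression setting and where it comes from can be verified from the shell even though the web GUI doesn't show it; dataset names here are placeholders.

```shell
# Check whether compression is set locally on the zvol or inherited
# from the parent pool (the SOURCE column distinguishes the two)
zfs get compression tank/vmstore
```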
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
Yes that's correct, my bad..

Here is a test with sync=always
[benchmark screenshot]


Nope, but I'm pretty sure I didn't change anything when creating it, so it should be set to inherit; and since my whole RAIDZ is set to off, the zvol shouldn't have any compression.

I tried copying a 32GB file between two local drives on a VM; both virtual hard disks are placed on the zvol. Here is the result with sync disabled:
[benchmark screenshot]



When running iSCSI, should ZFS sync be "enabled"?
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
If your data is important, you will want sync=always with iSCSI or a solid UPS system that shuts everything down in proper order if power is lost. I use option 2, with solid backups.
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
Also, mirrors are better for performance with VMs than RAIDZ1, 2, or 3.
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
ser_rhaegar said:
If your data is important, you will want sync=always with iSCSI or a solid UPS system that shuts everything down in proper order if power is lost. I use option 2, with solid backups.


OK! Right now I'm comparing blockio vs. fileio. Are you running iSCSI or NFS? Can you explain a bit more about your ESXi host 1?
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
I run iSCSI on a zvol. NFS was too slow for me in reads. I ran both with sync off for testing and I still use iSCSI with sync off.

ESXi host 1: FreeNAS with 12GB of RAM, 6 drives, 2TB each, RaidZ2 (not great for VMs). FreeNAS exposes a 500GB zvol via iSCSI. 10Gb internal SAN, linked to my switch with a 1Gb link on its own vlan. MTU 9000. ESXi host 2 connects to this with a 1Gb link, MTU 9000.

ESXi host 1 hosted VMs can reach 200-300MB/s
ESXi host 2 hosted VMs can reach 50-100MB/s

I don't have performance info for random I/O; however, none of my VMs are I/O intensive except Splunk, which runs fine on either host. The hosts are clustered and I can move VMs around easily with vMotion. Both hosts have local storage, so I can move VMs to local storage if I need to shut down either system.
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
ser_rhaegar said:
I run iSCSI on a zvol. NFS was too slow for me in reads. I ran both with sync off for testing and I still use iSCSI with sync off.

ESXi host 1: FreeNAS with 12GB of RAM, 6 drives, 2TB each, RaidZ2 (not great for VMs). FreeNAS exposes a 500GB zvol via iSCSI. 10Gb internal SAN, linked to my switch with a 1Gb link on its own vlan. MTU 9000. ESXi host 2 connects to this with a 1Gb link, MTU 9000.

ESXi host 1 hosted VMs can reach 200-300MB/s
ESXi host 2 hosted VMs can reach 50-100MB/s

I don't have performance info for random I/O however none of my VMs are I/O intensive except Splunk which runs fine on either host. Hosts are clustered and I can move VMs around with vMotion easily. Both hosts have local storage so I can move VMs to local storage if I need to shut down either system.


OK, thank you for the information. What about backup: how do you back up your VMs? Is it to another pool or disks, and what software do you use?
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
I replicate my pool from ESXi host 1 to ESXi host 2's FN VM. I swap the drives in ESXi host 2 every 30 days and rotate them off site.

I only just set up iSCSI over the last few days. I am actually testing the replicated zvol tonight to make sure it is mountable in a fresh ESXi install.

I also still run iSCSI in the default mode, so no sync. I have a UPS VM that cleanly shuts down my rack when power is lost. My rack is split between two 1500VA Cyberlink UPS units. The rack has a PDU split into two 7-outlet switched banks, and each UPS covers one half of the PDU. I have to boot the VMs manually afterward though, as my pools are encrypted. My FN VMs and my DHCP/DNS/NTP VMs are on the local storage of the hosts, so the network works without the other VMs.
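The replication setup described above can be sketched with ZFS snapshots and send/recv. This is a minimal example, not ser_rhaegar's exact configuration; pool, zvol, and hostnames are placeholders.

```shell
# Take a snapshot of the zvol and replicate it to the second host's
# FreeNAS VM over ssh (first run sends the full stream)
zfs snapshot tank/vmstore@rep1
zfs send tank/vmstore@rep1 | ssh freenas2 zfs recv -F backup/vmstore

# Subsequent runs send only the delta between snapshots
zfs snapshot tank/vmstore@rep2
zfs send -i rep1 tank/vmstore@rep2 | ssh freenas2 zfs recv backup/vmstore
```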
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
ser_rhaegar said:
I replicate my pool from ESXi host 1 to ESXi host 2's FN VM. I swap the drives in ESXi host 2 every 30 days and rotate them off site.

I only just set up iSCSI over the last few days. I am actually testing the replicated zvol tonight to make sure it is mountable in a fresh ESXi install.

I also still run iSCSI in the default mode, so no sync. I have a UPS VM that cleanly shuts down my rack when power is lost. My rack is split between two 1500VA Cyberlink UPS units. The rack has a PDU split into two 7-outlet switched banks, and each UPS covers one half of the PDU. I have to boot the VMs manually afterward though, as my pools are encrypted. My FN VMs and my DHCP/DNS/NTP VMs are on the local storage of the hosts, so the network works without the other VMs.


OK, ser_rhaegar! Thank you so much for your help! One last question: would you recommend buying two SSDs configured as a mirror for the ZIL, or simply running with ZFS sync off over NFS and a UPS?

Thanks in advance!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ROFLMAO.. set sync off! That's a HILARIOUS question!!!!

And.. one that has been discussed TO DEATH in this forum.
 

leenux_tux

Member
Joined
Sep 3, 2011
Messages
238
The comment about RAID types is an important one for this type of setup (using FreeNAS as a zvol target for VMs).

As I understand it, the further up the RAID "tree" you go (1, 2, 3), the more parity calculations have to be done, making writes a little slower at each level.

I very recently migrated one of my pools, a 4-disk RAIDZ1 (4x 1TB) used as an iSCSI target, to RAID10. Please bear in mind that I only did this a few days ago and have not done any tuning at all; I just deleted the existing data, destroyed the pool, recreated it, re-created the iSCSI volume, and restored the data from a backup. I can already see a marked improvement in VM boot-up times and general navigation around the VMs.
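For anyone following along, a four-disk striped-mirror ("RAID10") pool like the one described can be sketched as follows; "tank" and the da0..da3 device names are placeholders for your own pool name and disks.

```shell
# Create a pool of two mirrored vdevs (the ZFS equivalent of RAID10);
# two top-level vdevs roughly double random I/O versus one RAIDZ1 vdev
zpool create tank mirror da0 da1 mirror da2 da3

# Verify the layout: zpool status should show two mirror vdevs
zpool status tank
```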
 

ser_rhaegar

Senior Member
Joined
Feb 2, 2014
Messages
358
Whether or not to run a mirrored zil or set sync off is a question you'll have to answer on your own. I'm not willing to make a recommendation when it isn't my own data at risk.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, it's not that you changed RAID types. It's that you doubled the number of vdevs. If you had gone to a 2-disk mirrored pool you'd still have the same I/O, but throughput would have dropped. Going from a RAIDZ1 to a RAID10 means an instant doubling of I/O.

So let's face it.. double the I/O means almost double the performance. ;)

The reality is that the performance impact of RAIDZ1 versus RAIDZ2 or RAIDZ3 is pretty minor, as any CPU appropriate for FreeNAS can handle multiple GB/sec of parity throughput. So all of us with relatively piddly 100MB/sec or so of throughput aren't taxing our CPUs to any serious extent. My CPU from 2008 could do almost 3GB/sec, and it was a really crappy socket 1366 CPU too.. about the worst of them all, 2GHz without HT.
 

Stylaren

Neophyte
Joined
Feb 21, 2014
Messages
9
leenux_tux said:
The comment about RAID types is an important one for this type of setup (using FreeNAS as a zvol target for VMs).

As I understand it, the further up the RAID "tree" you go (1, 2, 3), the more parity calculations have to be done, making writes a little slower at each level.

I very recently migrated one of my pools, a 4-disk RAIDZ1 (4x 1TB) used as an iSCSI target, to RAID10. Please bear in mind that I only did this a few days ago and have not done any tuning at all; I just deleted the existing data, destroyed the pool, recreated it, re-created the iSCSI volume, and restored the data from a backup. I can already see a marked improvement in VM boot-up times and general navigation around the VMs.



Good point, leenux_tux. Please let me know when you've had some time to run some tests. How many VMs are you running, and do you use a SLOG/ZIL device? Are you running sync=always?
 

leenux_tux

Member
Joined
Sep 3, 2011
Messages
238
This is a home/home-office system doing different "jobs" for me (I work for myself), so I only run a maximum of 4 VMs at any one time. There are nearly 20 VMs (various OSes) I can spin up, so I guess it would be interesting to see how many I can get running and what the response/usability is like. I think my ESXi box would be the first to complain though. It's fairly low spec: 8GB RAM, AMD Phenom quad core (Black Edition), no hard drive, thumb drive for ESXi 5.

In answer to your questions..

No SLOG/ZIL SSD (though I am thinking of investing in one for testing/education purposes) and no changes to the "sync" value, so it's at the default, whatever that is.

To be honest, there are loads of other things I want to try. I have three NICs in my ESXi box (MPIO for iSCSI?), and my FN box has 5 NICs, one dedicated to IPMI (LAGG, anyone?). All for education purposes, plus I want to make sure I am getting the best performance/availability/redundancy for the cash I have spent, and share the experience/information as well.
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
3,563
cyberjock said:
ROFLMAO.. set sync off! That's a HILARIOUS question!!!!

And.. one that has been discussed TO DEATH in this forum.

Discussed in depth quite well here:
http://forums.freenas.org/index.php...xi-nfs-so-slow-and-why-is-iscsi-faster.12506/

Here's a pull-quote from jgreco on this:

jgreco said:
These are the four options you have:
  1. NFS by default will implement sync writes as requested by the ESXi client. By default, FreeNAS will properly store data using sync mode for an ESXi client. That's why it is slow. You can make it faster with a SSD SLOG device. How much faster is basically a function of how fast the SLOG device is.
  2. Some people suggest using "sync=disabled" on an NFS share to gain speed. This causes async writes of your VM data, and yes, it is lightning fast. However, in addition to turning off sync writes for the VM data, it also turns off sync writes for ZFS metadata. This may be hazardous to both your VM's and the integrity of your pool and ZFS filesystem.
  3. iSCSI by default does not implement sync writes. As such, it often appears to users to be much faster, and therefore a much better choice than NFS. However, your VM data is being written async, which is hazardous to your VM's. On the other hand, the ZFS filesystem and pool metadata are being written synchronously, which is a good thing. That means that this is probably the way to go if you refuse to buy a SSD SLOG device and are okay with some risk to your VM's.
  4. iSCSI can be made to implement sync writes. Set "sync=always" on the dataset. Write performance will be, of course, poor without a SLOG device.

For server VMs, I use Option #4 (#1 also works). Data there is crucial and rolling back to a snapshot or losing a txg isn't an option. For VMs that I don't care about and accept the risk of having to roll back to snapshot/backup in case of serious failure, I use #3.
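The four options above boil down to one-line sync settings on the dataset or zvol. A sketch, with "tank/nfs_ds" and "tank/vmstore" as placeholder names:

```shell
# Option 2: async NFS -- fast, but risky for VM data and pool metadata
zfs set sync=disabled tank/nfs_ds

# Option 3: iSCSI default -- leave the zvol at sync=standard
zfs set sync=standard tank/vmstore

# Option 4: force sync writes on the iSCSI zvol (slow without a SLOG)
zfs set sync=always tank/vmstore
```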
 

pbucher

Member
Joined
Oct 15, 2012
Messages
180
HoneyBadger said:
For server VMs, I use Option #4 (#1 also works). Data there is crucial and rolling back to a snapshot or losing a txg isn't an option. For VMs that I don't care about and accept the risk of having to roll back to snapshot/backup in case of serious failure, I use #3.


I've been using option #1 in an enterprise production environment for 16 months now without issue. The key is to use a good SLOG device if you are running VMs from ESXi.
 