Slow Performance writing to NFS mount at 2.5 MBps from ESXi

Status
Not open for further replies.

Chris Jones

Newbie
Joined
Nov 25, 2013
Messages
3
Hi there,

I am a new FreeNAS (and ZFS) user and I have built myself a physical FreeNAS server (to replace my Openfiler one) for use with my ESXi lab. Installing and setting up the volume and NFS share was quite straightforward; however, I am getting very poor write performance.

I'm hoping someone can point me in the right direction as to how I can understand what the problem is and what I can do about making it a bit quicker! At the moment, I can't seem to write any faster than 2.5 megabytes (or 20 megabits) per second. I was hoping to write at about 40-50MBps, like I was with my Openfiler box.

My current setup:
CPU/Memory: i3 4130 (Haswell) / 16GB DDR3 1666MHz
Mobo: GA-H87M-D3H (all default settings)
Drives: 3 x 2TB SATA3 Seagate 7200rpm.
Dedicated 1Gbps network between host and FreeNAS box.

My one volume is configured as a stripe. I tried configuring it as RAIDZ, but there was little difference (RAIDZ was slightly slower, though not noticeably). In my test, I migrated a VM from local storage on the ESXi host (a single 10k SAS disk, no RAID) to the FreeNAS box via NFS. I have also configured NFS to map user root, as per the VMware best practice doco.

I've attached some screenshots showing graphs of my network speed and write latency from ESXi. As you can see, the network speed suggests I'm receiving at about 20 Mbps and my write latency averaged around 1000ms for the duration of the transaction.

Configuration of my volume:
freenas-volumestatus.png


Graph of network throughput during write transaction:
freenas-network.png


Graph from ESXi showing disk latency averaging around 900-1000ms during write transaction:
freenas-latency.png


Any pointers as to where and how I can look into understanding and overcoming this issue would be greatly appreciated. Thanks in advance!

Regards,
Chris.
 

apofis

Newbie
Joined
Feb 8, 2012
Messages
3
Hi,
I've been using FreeNAS with VMware for about 4 years, and here's what I've learned:
  1. SATA 7200rpm drives have high latency under multiple concurrent writes (I use 10k SATA).
  2. If you use FreeNAS as a datastore, add a ZIL (SLOG) device (two SLC 8GB SSDs in a mirror work great).
  3. If you don't have the SATA ports (or the money) :) you can disable the ZIL:
    1. ZFS v15 - add vfs.zfs.zil_disable="1" to the boot config
    2. ZFS v28 - zfs set sync=disabled tank/dataset
  4. You have plenty of RAM, so try the RAM cache (FreeNAS 9 autotuning works fine).
  5. Set the transaction timeout to 5s (the standard is 30): vfs.zfs.txg.timeout="5"
  6. The NFS thread maximum for your CPU is 2 (one per core).
  7. Use any SSD as a read cache (L2ARC) disk.
  8. If you use a managed switch, add a second NIC and enable LACP (good for more than one VMware node).
  9. For database virtual disks, try iSCSI; the latency will be lower.
by Mira
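For reference, items 3 and 5 above correspond to the following tunables and commands; the pool/dataset name is an example, not from the thread, and (as discussed below) disabling the ZIL is unsafe outside of testing:

```shell
# ZFS v15 (legacy): /boot/loader.conf tunables
vfs.zfs.zil_disable="1"     # disables the ZIL -- test use only
vfs.zfs.txg.timeout="5"     # transaction group timeout, in seconds

# ZFS v28 and later: disable sync per dataset instead (dataset name hypothetical)
zfs set sync=disabled tank/dataset
zfs get sync tank/dataset   # verify the current setting
```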
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Disabling sync is data suicide. I have no idea where you even got the idea that it's safe to set sync=disabled. It's not something you should EVER do except for testing purposes, and it should NEVER be run on a system that has data you trust. Period.

FreeNAS autotuning is designed for VERY large systems (i.e. >64GB of RAM). I don't understand why you think it should be used, or why you think the system doesn't have a RAM cache to start with.

The default timeout is already 5 seconds. I have no clue why you think it's 30.

Adding an L2ARC isn't going to help much, nor can you just use any SSD. You have to right-size your L2ARC to your system RAM. In short, though, an L2ARC isn't going to help with his problem anyway.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
Short form: you're actually doing pretty well with what you have.

Please read my post here:

http://forums.freenas.org/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/#post-64328

Your ESXi host is asking your FreeNAS to do sync NFS writes. This is by VMware's design. Your Openfiler box was probably ignoring those requests, or maybe you had tweaked it to ignore them. Basically you have a choice between doing it right and doing it half-arsed. FreeNAS is doing it right for you. You can add a SLOG (ZIL) device to make it do it right, faster. Or you can do it half-arsed by breaking the ZIL somehow. If you break the ZIL it'll go crazy fast like you're used to, but your VMs may be damaged if anything unexpected happens.
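As a sketch of the "do it right faster" option: adding a dedicated SLOG to a pool looks like the following (pool and device names are hypothetical, not from this thread):

```shell
# Attach a mirrored pair of SSDs as a dedicated SLOG (log vdev)
zpool add tank log mirror ada4 ada5

# The new devices appear under a "logs" section in the pool layout
zpool status tank
```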
 

apofis

Newbie
Joined
Feb 8, 2012
Messages
3
I know disabling the ZIL is suicide, but for testing write speed it's not a problem. Two SSD disks are cheap and work nicely.
It's true I didn't know the new timeout defaults, but I know this works fine :)
I use a server with 32GB of RAM, dedup disabled, and lz4 compression enabled.
A fully working NAS with six 10k WD disks + 2x SSD ZIL + a 64GB SSD cache, an E3-1230 V2 @ 3.30GHz, 32GB ECC RAM, and an Intel server board serves 3 VMware nodes without any problem.
 

Chris Jones

Newbie
Joined
Nov 25, 2013
Messages
3
Thanks for the link @jgreco. Very helpful for ZFS beginners like me.

Whilst this is only a lab, I am keen to ensure that I'm not putting my data at risk. I've read from a few places that Openfiler doesn't perform sync NFS writes, and that ZFS is far better at ensuring data integrity.

I did have an SSD that I could throw into it to see what happens. Whilst my ZFS terminology isn't great, I used the FreeNAS GUI to add this disk as cache alongside my (now four) SATA3 7200rpm disks. Here is a capture of this new setup:

freenas-ssd-config.png

Both metrics I was watching (network speed and disk latency at the ESXi host) have improved by about 10% and 17% respectively.

freenas-network2.png


(averaging about 21Mbps as opposed to 19Mbps in my original post)

freenas-esx-latency.png


(averaging about 750ms as opposed to 900ms in my original post).

So with the simple addition of a cheap consumer-grade SSD as some form of cache, I was able to get a performance gain. However, I still don't understand why I'm a long way off the following:

- The 1Gbps network's theoretical throughput of 125MB/s. I'm currently not able to push beyond 21Mbps (about 2% of capacity).
- What I'd expect to write to SATA3 7200rpm disks, especially in a striped non-redundant configuration: at least 50MB/s (or 400Mbps), which is itself well below the 156MB/s average data rate quoted for this drive.

Surely ZFS/FreeNAS doesn't have an overhead that causes this much of an impact.
 

Chris Jones

Newbie
Joined
Nov 25, 2013
Messages
3
Forgot to mention, the output above is from moving a 15GB VM from the 10k SAS local disk on the ESXi host, across the 1Gbps network, to the FreeNAS box. In the end, it took 101 minutes to copy the 15GB. That's about 2.5MB/s.

However, if I move the VM back from the FreeNAS box to local storage (i.e. read from FreeNAS and write to the local disk), I get some fantastic results: it took 7 minutes to move the same 15GB, or about 36MB/s.
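For what it's worth, the arithmetic on those two transfers checks out (integer MB/s, taking 15GB as 15360MB):

```shell
# Sanity-check the quoted transfer rates: 15 GB in 101 min vs 7 min
size_mb=$((15 * 1024))
echo "write: $((size_mb / (101 * 60))) MB/s"   # slow path: sync NFS writes to FreeNAS
echo "read:  $((size_mb / (7 * 60))) MB/s"     # fast path: reads back from FreeNAS
```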

freenas-3-network.PNG


Now I can see my network adapter at capacity.

freenas-3-esx-latency.png


And my latency is much much better.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And you do know that your cache drive should never exceed about 5x your ARC (which is usually 70-80% of your system RAM)? So your 128GB L2ARC is WAY too big for your system. In fact, if you oversize it, you can actually hurt performance more than not having it at all!
 

KTrain

Member
Joined
Dec 29, 2013
Messages
36
This thread was very helpful for me. I'm working through some VMware/NFS performance issues in my home lab as well, so I now have some things to look into. Thanks to all who have contributed.
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
So is NFS slow because it has no L2ARC?

How big an SSD do you have? I have 256GB of RAM.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
NFS has no L2ARC because NFS is a file sharing protocol. ZFS can have L2ARC because it is a filesystem that has caching capabilities. If you run NFS on top of a ZFS pool, you can have L2ARC. But Cyberjock is warning against having irrational ratios of ARC:L2ARC. The specific issue is that with only 16GB of RAM, the poster has a bad ratio (16GB:128GB) and really doesn't have enough ARC to support L2ARC in the first place. Usually you want to have 64GB or more RAM before you look at L2ARC.

With 256GB of RAM, you could pretty easily add two 512GB SSDs, possibly bigger. I have to run right now, but there's plenty of discussion about this out there.
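A back-of-the-envelope version of that ratio check for the 16GB system discussed above, using the ~5x rule of thumb from earlier in the thread (the 80% ARC figure is an assumption taken from the 70-80% range quoted):

```shell
# L2ARC sizing sanity check for a 16GB-RAM system
ram_gb=16
arc_gb=$((ram_gb * 80 / 100))    # ARC is usually ~70-80% of RAM
echo "max recommended L2ARC: $((arc_gb * 5)) GB"   # well under the 128GB cache device
```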
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
Would it work well if I bought two 800GB SSDs?

But why is NFS slow; what's the problem?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
With 256GB of RAM, you could support 1600GB of SSD with the right combination of block size and other parameters. This is a dicey area of system design because you can create scenarios that will tend to be problematic. For example, if you are using NFS and a 128KB record size, the size of objects being stored in the L2ARC will tend towards being bigger. If you're doing iSCSI with an 8KB blocksize, the size of objects is smaller, pushing up the number of objects, which in turn increases pressure on the ARC, which is where ZFS has to maintain the tables of what is out in the L2ARC.

Your speeds via NFS will tend to reflect how well your system is built. If you have 256GB of RAM and lots of L2ARC, reading the same things over and over, you're likely to have a better time of it than the guy above with 16GB. If you don't have sync writes, that's better than needing sync writes. If you need sync writes and you have a SLOG, that's better than not having a SLOG.
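To illustrate the ARC-pressure point: every block cached in L2ARC needs a header kept in ARC, so smaller blocks mean many more headers for the same cache size (the 1600GB figure is from the post above; the header math is just a count of blocks):

```shell
# Number of cached blocks (hence ARC headers) for 1600GB of L2ARC
l2arc_kb=$((1600 * 1024 * 1024))
echo "128K records: $((l2arc_kb / 128)) blocks"
echo "8K blocks:    $((l2arc_kb / 8)) blocks"   # 16x as many headers to track in ARC
```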
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
When I turn sync off I get better speed, but with sync on the speed is only around 2MB/s. So if I get a SLOG and turn sync on, will I get better speed?

Since my English is not good, it takes me longer to understand 100%.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

I will summarize this as follows:

1) Sync writes without SLOG are very slow.

2) Sync writes with a SLOG device will be moderately fast, but nowhere near as fast as turning off sync.

3) Turning off sync is as fast as it gets.

Sync is a very complicated guarantee that the system is making to you that you'll be able to retrieve a bit of data, even if there's a crash or other adverse event.

Think of a banker. He's got a bank vault full of money. If he has to go to the vault, unlock it, open it, do a transaction, close it, lock it, and return to his desk each time a customer asks for something, that's more secure but also very slow. So usually they handle things out of a cash till and only go to the vault when they really need to. The cash till is the thing that is more risky, but it is also much quicker to work with.
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
Thank you for your explanation; I understand better now.

If you get a super-fast SSD as a SLOG, can you get better speeds?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
Some fast NVMe SSD devices are better at SLOG than older SATA devices.

Both are slower than disabling sync writes.
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
Good to know. How fast do you think it would be?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,443
It depends. If you're doing small-block writes, like VM storage, it is really difficult to do well. You may see a 33-50% drop in write performance compared to just turning off sync.

This really matters in some environments, but not so much in others. The big thing to worry about is a filer without sync writes crashing while its VMs continue to run. In that case, the VMs may have committed data to virtual storage that never actually got written to real disks, at which point you have VM disk corruption.
 

kikotte

Member
Joined
Oct 1, 2017
Messages
75
I understand now.

Would it work better with FCoE?

Does FreeNAS support FCoE?
 