Slow performance on ZFS/NFS with SSD

Status
Not open for further replies.

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Hi Guys

I've done some performance tweaking in the past, when I was running 4 disks in RAID 1+0 under FreeNAS and exporting them over NFS to my ESXi lab. I added an SSD running ZIL/L2ARC and all was well.

I finally decided to upgrade the storage, so I got a 750GB Samsung 840 EVO and added it to my FreeNAS box.

However, when I mount it over NFS I get really poor performance.

I've tried with sync enabled and disabled.
Locally on the FreeNAS box I'm getting 300Mbps+, but from ESXi I get nothing like this.

Running low-end VMs I'm getting write and read latencies over 1000ms. I can't make sense of it: since the source drive is an SSD it should be lightning fast. Any ideas?
 

Enlightend

Dabbler
Joined
Oct 30, 2013
Messages
15
Have you tried iSCSI instead of NFS?
With ESXi especially, I find NFS requires absurd amounts of fine-tuning, and what I would consider unsafe tweaks, to get it working fast enough to be useful.
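For context, the usual "unsafe tweak" being referred to is disabling sync writes, since ESXi issues its NFS writes synchronously. A sketch of how that diagnostic is typically done — the dataset name is a placeholder, and actually running VMs this way risks guest corruption on power loss:

```shell
# Check the current setting on the dataset backing the ESXi datastore
# ("tank/vmstore" is a placeholder -- substitute your own dataset)
zfs get sync tank/vmstore

# Disable sync writes purely to see whether latency drops;
# this trades data safety for speed and is NOT safe for VM storage
zfs set sync=disabled tank/vmstore

# ...re-test from ESXi, then put the default back
zfs set sync=standard tank/vmstore
```

If latency collapses with sync=disabled, the bottleneck is sync-write handling (ZIL), not raw disk or network throughput.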
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Have you tried iSCSI instead of NFS?
With ESXi especially, I find NFS requires absurd amounts of fine-tuning, and what I would consider unsafe tweaks, to get it working fast enough to be useful.


Yeah, I've tried iSCSI both as a device extent and as a file extent; no luck either way.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Please post your full hardware setup. Not sure what your config is, but a 750GB l2arc is probably way too big for your system RAM. Your specs will tell me if it is. ;)
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Please post your full hardware setup. Not sure what your config is, but a 750GB l2arc is probably way too big for your system RAM. Your specs will tell me if it is. ;)


Sorry, I realise I wasn't clear in my original post.
I was originally running the 4-disk array with a 30GB/30GB ZIL/L2ARC.
The 750GB is a standalone disk, not an L2ARC.

Specs are

Core i5
8GB RAM
3 x 1Gb NICs (2 for NFS/iSCSI, 1 for CIFS/SMB)
2nd array: 2 x 2TB + 2 x 3TB drives in RAID 1+0
750GB Samsung 840 EVO, standalone
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You need to upgrade your RAM. I'd go to 16GB at the minimum and 32GB would be preferred. Running VMs off of FreeNAS needs lots of resources to ensure good performance.

Your l2arc should never exceed about 5x your RAM. This is because the l2arc needs RAM for the index. In your case, you probably made performance worse by using the l2arc with so little RAM. So to compensate for wanting/needing/using an l2arc you have to put even more RAM in the system. Hence my recommendation of 32GB of RAM.
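That rule of thumb can be sanity-checked with rough arithmetic. A sketch, assuming ~70 bytes of ARC header per L2ARC record and an 8 KiB average record size — both figures are assumptions that vary by ZFS version and workload:

```shell
# Back-of-the-envelope RAM cost of indexing an L2ARC.
# hdr_bytes and recordsize are assumed values, not exact constants.
l2arc_bytes=750000000000      # a 750GB L2ARC
recordsize=8192               # assumed average record size (VM workload)
hdr_bytes=70                  # assumed per-record header cost in RAM

records=$(( l2arc_bytes / recordsize ))
ram_mib=$(( records * hdr_bytes / 1048576 ))
echo "L2ARC index would need roughly ${ram_mib} MiB of RAM"
```

Under those assumptions a 750GB L2ARC would consume roughly 6 GiB of an 8GB system just for its index, leaving almost nothing for the ARC itself — which is exactly why L2ARC size has to be scaled to RAM.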

Remember that thing that I said above about you needing more RAM because of your l2arc? I was right. Even at 30GB of l2arc, that's more than what you can realistically use with only 8GB of RAM anyway. Common noobie mistake. Easy fix.. more RAM.

You didn't really include all of your specs. The exact CPU and motherboard would have been useful, as Intel NICs are preferred. But I wouldn't look at trying a different NIC until you have 32GB of RAM. And since you are using an i5 (which means you aren't using ECC RAM) I'll link you to this http://forums.freenas.org/threads/ecc-vs-non-ecc-ram-and-zfs.15449/ and highly recommend you move to a motherboard, CPU, and RAM that support ECC if your pool is important.
 

Enlightend

Dabbler
Joined
Oct 30, 2013
Messages
15
@Cyberjock, he isn't using an L2ARC, his actual store is a 750GB SSD, it's not a 750GB L2ARC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@Cyberjock, he isn't using an L2ARC, his actual store is a 750GB SSD, it's not a 750GB L2ARC.

I agree. When you start choosing to use an l2arc you put stress on the arc, hence my recommendation for 32GB of RAM.
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
I agree. When you start choosing to use an l2arc you put stress on the arc, hence my recommendation for 32GB of RAM.


That's why I didn't go down the large L2ARC path. At the moment I have no L2ARC configured, just direct ZFS on an SSD.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's why I didn't go down the large L2ARC path. At the moment I have no L2ARC configured, just direct ZFS on an SSD.

But you mentioned above that you have 30GB for L2ARC/ZIL. I am confused...
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
But you mentioned above that you have 30GB for L2ARC/ZIL. I am confused...


Sorry if I was unclear: that was the previous setup, which was working fine.
I've upgraded the storage, so instead of the L2ARC/ZIL and the 4 disks, I was planning to run my VMs directly from the 750GB SSD, which is a straight ZFS volume with no L2ARC/ZIL.
But I'm getting really poor performance from it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
But you really do have just 8GB of RAM, right?

Generally, as soon as someone says they want to use a zpool as a datastore, they're implying a system with lots of RAM. That's one reason I said nothing less than 16GB, and 32GB would be better. Upgrading your RAM is virtually a necessity as soon as you start saying you want the datastore on a zpool. The RAM helps minimize latency and avoids unnecessarily high turnover of the cache. I realize that SSDs have almost no latency, but the high cache turnover from insufficient RAM makes it very difficult to get good performance from a pool.

I'd seriously consider just going to 32GB of RAM and be done with it. If money is tight you can try upgrading to just 16GB but don't be surprised if it doesn't help enough to make you happy.
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
But you really do have just 8GB of RAM, right?

Generally, as soon as someone says they want to use a zpool as a datastore, they're implying a system with lots of RAM. That's one reason I said nothing less than 16GB, and 32GB would be better. Upgrading your RAM is virtually a necessity as soon as you start saying you want the datastore on a zpool. The RAM helps minimize latency and avoids unnecessarily high turnover of the cache. I realize that SSDs have almost no latency, but the high cache turnover from insufficient RAM makes it very difficult to get good performance from a pool.

I'd seriously consider just going to 32GB of RAM and be done with it. If money is tight you can try upgrading to just 16GB but don't be surprised if it doesn't help enough to make you happy.


Unfortunately the hardware will only take 8GB.

I'm not using an L2ARC cache, so I don't follow what you mean. Also, there doesn't appear to be the cache issue you're mentioning: when I use dd for performance testing locally on the FreeNAS box I don't have any performance issues. I'm not aware of a caching system that sits between the FS and the network.
The NICs I'm using are 2 x Intel PRO/1000 GT and an onboard Broadcom; it makes no difference if I force the connection through the Intels or the Broadcom.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Unfortunately the hardware will only take 8GB.

Then you really have no options to fix your latency unfortunately. :(

I'm not using an L2ARC cache, so I don't follow what you mean. Also, there doesn't appear to be the cache issue you're mentioning: when I use dd for performance testing locally on the FreeNAS box I don't have any performance issues. I'm not aware of a caching system that sits between the FS and the network.

The caching is ZFS's caching. It uses no more than 7/8ths of your system RAM for its various uses: the ARC, transaction buffering, etc. That's why, if you read the manual, section 1.3.2 says:


The best way to get the most out of your FreeNAS® system is to install as much RAM as possible. If your RAM is limited, consider using UFS until you can afford better hardware. FreeNAS® with ZFS typically requires a minimum of 8 GB of RAM in order to provide good performance and stability. The more RAM, the better the performance, and the FreeNAS® Forums provide anecdotal evidence from users on how much performance is gained by adding more RAM. For systems with large disk capacity (greater than 8 TB), a general rule of thumb is 1 GB of RAM for every 1 TB of storage

RAM is the secret sauce to ZFS's performance (or lack thereof). Since you can't upgrade beyond 8GB of RAM, you really are kind of stuck between a rock and a hard place. As a home server, if it's not too big, you might be okay with 8GB of RAM. You won't care if a movie takes an extra 200ms or more to send data, as your video player should have a few seconds of buffer to prevent video hiccups. But for VMs, 8GB of RAM just isn't going to cut it. Requiring low latency with 8GB of RAM is just not possible unless you want to switch to UFS.
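For anyone wanting to see this for themselves: on FreeBSD-based FreeNAS the ARC exposes its counters through sysctl, so you can watch how much of that RAM budget is actually in use. A sketch; the kstat names are those used by FreeBSD's ZFS port:

```shell
# Current ARC size and its configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max

# Hit/miss counters: a rising miss rate under VM load suggests the
# working set no longer fits in the ARC
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```

An ARC pinned at its ceiling with a high miss rate is the symptom of exactly the RAM starvation being described here.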



The NICs I'm using are 2 x Intel PRO/1000 GT and an onboard Broadcom; it makes no difference if I force the connection through the Intels or the Broadcom.

I'm not really sure why you brought this up. I didn't mention your NIC being a problem at all. I said that Intel NICs are preferred, but that's all I said about it. I also said previously that you shouldn't worry about the NIC until you get 32GB of RAM. Upgrading RAM is going to provide a far larger change in performance than changing from one of those POS Realteks to an Intel NIC for your type of load. Of course, you don't have a Realtek, but it doesn't matter.

What you should take away from this whole thread is this...

RAM is the secret sauce to ZFS's performance. If you don't have enough, you won't be happy.

While I'm sorry your server isn't performing so well, you have what the manual clearly labels as the minimum. For reasons that you are seeing firsthand I never recommend people build servers that meet the minimum requirements when they are already maxed out. Having some extra breathing room is very important at times and can save you from having to buy all new hardware. In your case, you are looking at going to UFS or going to new hardware. :(
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81

I understand what you're saying, and the part you quoted was: "FreeNAS® with ZFS typically requires a minimum of 8 GB of RAM in order to provide good performance and stability." This strikes me as more of a recommendation for good performance than a bare minimum. I've previously run it with less, and since upgrading to 8GB I've seen a benefit.

However, I don't believe this is the bottleneck. As I mentioned, I'm getting great performance through ZFS when accessing it locally. If RAM were the bottleneck, the expected behaviour would be slow local performance too, which I'm not seeing. I only see bad performance when I mount it remotely.
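A local dd run exercises only the disks and the ARC, so isolating where the remote path falls over needs its own tests. A rough sketch of the comparison — paths and the IP address are placeholders, and writing zeros will overstate throughput if compression is enabled on the dataset:

```shell
# 1. Raw network throughput, independent of disks (assumes iperf is
#    available on both ends)
#    on the FreeNAS box:        iperf -s
#    on a client/ESXi-side VM:  iperf -c 192.168.1.10   # placeholder IP

# 2. Local pool write speed on the SSD dataset (placeholder path;
#    /dev/zero compresses to nothing if compression is on)
dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=4096

# 3. Repeat the same dd from inside a guest on the NFS datastore.
# Comparing the three numbers shows whether the disk, the network,
# or NFS sync-write handling is the bottleneck.
```

If (1) and (2) are both fast but (3) is slow, the problem sits in the NFS/sync layer rather than in RAM-starved local reads.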
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I understand what you're saying, and the part you quoted was: "FreeNAS® with ZFS typically requires a minimum of 8 GB of RAM in order to provide good performance and stability." This strikes me as more of a recommendation for good performance than a bare minimum. I've previously run it with less, and since upgrading to 8GB I've seen a benefit.

No, we've had lots of users whose zpool became unmountable (i.e., they lost their pool permanently) with less than 8GB of RAM, hence that warning. That warning was added by me and approved by others. We used to see someone show up weekly who had lost their data because they had less than what we now call the "minimum". Even now, we've had 2 people in the last 2 weeks with pools that wouldn't mount on 8GB of RAM. I chose the words "good performance" and "stability" after careful consideration. That statement is as accurate as it's going to get for general use. It's up to the administrator to decide what hardware to use for their system.

However, I don't believe this is the bottleneck. As I mentioned, I'm getting great performance through ZFS when accessing it locally. If RAM were the bottleneck, the expected behaviour would be slow local performance too, which I'm not seeing. I only see bad performance when I mount it remotely.

System behavior is different locally versus over a share, so your comparison doesn't tell us anything.

Go read around the forums; you aren't alone with this NFS problem with ESXi. And people with low RAM are told to add more RAM.
 