NFS, ZIL, Proxmox, and Performance.. Oh my.

Status
Not open for further replies.

mygeeknc

Dabbler
Joined
May 29, 2014
Messages
12
Like most newcomers to FreeNAS, you learn that there is a hell of a lot more to it than just install and go. I have two older PowerEdge C2100 FS12-TY's.

Each server has the following specs:
  1. Xeon Quad Core L5520 x1
  2. 36GB of ECC RAM
  3. 6 x Western Digital Red
  4. 1 x 60GB Kingston KC300 ZIL Drive
  5. HBA is a PERC H700 (yes, I know this is frowned upon; we have an Intel M1015 coming)
  6. HP Procurve gigabit switches
I want to use this box to serve VMs to Proxmox via NFS.

Now, I've done enough research to know that I've made some mistakes in the setup that I have now. For example, I set up a single volume that spans all of the disks, using lz4 compression on RAIDZ2.

Here is a screenshot of the current load on the system. http://i.imgur.com/5JGhSfc.png

Last night, on our secondary server, I set up a new volume as mirrors with a ZIL, created a dataset, and started an rsync from our first NAS to the second. I was getting around 15Mb/sec, which does not seem right.

On the same note, though rsync doesn't use NFS, I've also heard that the ZIL isn't really used unless NFS sync writes are active. I can't figure out how to check this on Proxmox other than looking at the mount points, which have no sync options enabled.
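For reference, this is what I've been looking at so far (a sketch; "tank/vms" is a placeholder for the actual pool/dataset):

```shell
# On the Proxmox host: show the NFS mount options actually in effect.
# Note: sync behavior usually comes from the client issuing O_SYNC/fsync
# (as NFS does for stable writes), not from a "sync" mount option.
nfsstat -m
# or:
mount -t nfs,nfs4

# On the FreeNAS box: check the sync policy on the dataset.
zfs get sync tank/vms
# standard = honor sync requests from clients (the default)
# always   = treat every write as synchronous
# disabled = ignore sync requests (fast, but unsafe for VM data)
```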

Other than the HBA and possibly more RAM, is there anything I'm doing wrong here? Am I expecting too much from this setup?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
rsync is slow. Get used to it. It was never billed as a fast product (it's single-threaded, so to speak, and there is no way to fix that without a complete redesign of rsync, which the developer has said isn't happening). It was only billed as minimizing network traffic, and it does that fairly well. Keep in mind that you cannot have rsync running at the same time the VMs themselves are being accessed; you'll end up with nonviable copies as a result.
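If you do have to rsync, one common workaround for the consistency problem (a sketch; pool, dataset, and host names are placeholders) is to copy from a ZFS snapshot instead of the live files, so the source is at least crash-consistent:

```shell
# Snapshot the VM dataset; the snapshot is an atomic, read-only
# point-in-time view, so rsync won't see files changing underneath it.
zfs snapshot tank/vms@backup1

# Snapshots are exposed read-only under the hidden .zfs directory.
rsync -aH /mnt/tank/vms/.zfs/snapshot/backup1/ \
      backup-nas:/mnt/tank/vms-backup/

# Clean up when the copy is done.
zfs destroy tank/vms@backup1
```

This only gives you crash-consistency (as if the VM had lost power at the snapshot instant), which is still far better than copying disk images while they're being written.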

ZFS replication is probably a better choice in terms of getting your data from one place to another. It *will* saturate your Gb LAN if your pool can actually perform at that speed (and yours certainly can/should). Note that again your VMs should be off when the snapshot takes place.

You are correct that the ZIL isn't used unless NFS sync writes are in play. You'd have to go to Proxmox to find out to what extent they use sync (if at all). You can monitor what's actually happening with the zilstat command, but I wouldn't rely on it alone: sync may only be used under certain conditions, and those conditions may not be met at the moment you happen to be watching. For your data's sake, get a solid answer straight from the horse's mouth, so go to Proxmox for that answer. ;)
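A quick way to watch for ZIL traffic while the VMs are under load (a sketch; pool name is a placeholder):

```shell
# Report ZIL activity once per second. Nonzero ops/bytes while the
# VMs are running means sync writes are actually hitting the ZIL.
zilstat 1

# You can also watch the dedicated log device directly (if one is
# attached) in the per-vdev iostat output.
zpool iostat -v tank 1
```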

36GB is not a lot of RAM for running VMs. When I did consultant work I didn't even consider a server for VM use without 64GB of RAM minimum (96GB is much better) and a nice L2ARC and/or ZIL as appropriate. I've had customers that wanted to run some VMs and went with 256GB of RAM to start. It depends heavily on many factors, far more than you're going to get from a forum setting.

If you had consulted me for a VM-storage build back then and you weren't willing to spend at least $3000, I wouldn't have taken the job. When you aren't spending that kind of money on hardware you'll probably be unhappy, and you *will* blame the consultant. It wasn't worth my time and effort to go through that with a customer who didn't know better, pinched pennies with me, and later blamed me because they wouldn't spend the money. I've got better things to do than deal with that drama. And believe me, people freak out when I ask if they are going to spend that kind of money, and then after I give them a parts list they'll argue over every single part trying to save a dollar wherever they can.

I've been around here long enough to know where you can and can't save money, and $3000 is a pretty low budget for running VMs reliably and with enough performance to avoid problems.

To be blunt, I think you are asking a bit much from your hardware. You're going to need more RAM, an L2ARC, and potentially a ZIL.
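For what adding those pieces looks like (a sketch; pool and device names are placeholders, and on FreeNAS you'd normally do this through the GUI's Volume Manager rather than the CLI):

```shell
# Add an SSD as L2ARC (read cache). L2ARC contents are disposable,
# so a single unmirrored device is fine here.
zpool add tank cache da6

# Add a mirrored pair of SSDs as a dedicated log device (SLOG).
# Mirroring the log protects in-flight sync writes if one SSD dies.
zpool add tank log mirror da7 da8
```

Note that an L2ARC also consumes RAM for its headers, so it isn't a substitute for adding memory first.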

And while I'm being brutally honest I should add that VMs are basically the worst workload you can put on ZFS (and the hardest to get to work right). This isn't going to necessarily come quick or easy, and how long it takes you to get it to handle your workload is going to depend heavily on how much loading you have. If you've got heavily loaded VMs you can expect this to be very painful for you. (We've had people that spent lots of money and several months with VMs that wouldn't stay up because of insufficient hardware. One or two almost lost their jobs as a result.) On the other hand if they are only lightly loaded you might be able to get it to work 'well enough' that you can leave it alone.

Just don't underestimate the cost and complexity of going with ZFS for VMs.
 

mygeeknc

Dabbler
Joined
May 29, 2014
Messages
12
cyberjock, I really appreciate your response here, and it is not taken lightly. It sounds like we may just need to look at a storage solution that is not ZFS-based and use these NAS servers for backups or some other type of file-based storage rather than VMs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That is probably a good assessment of the situation based on what you've said previously.

But in its current setup (with the M1015 when it arrives) it could make a very good backup machine. So I think that's a very good use case for the hardware. ;)
 