I need to rebuild (or perhaps just reconfigure) my NAS


reflection

Cadet
Joined
Feb 12, 2013
Messages
6
Hello,

First post. First thing I want to say is that FreeNAS is awesome!! I'm glad someone introduced it to me.

After reading several posts about how people are getting write speeds of over 100MB/s, I figured I'm doing something wrong. I'm only getting about 10MB/s.

I built my NAS less than a month ago, so I'm not opposed to rebuilding it. I have a server that is intended to be multi-purpose (NAS, Linux, Windows), so I'm running ESXi. It is only used at my house, where I'm usually the only one accessing it.

Here are my server specs:
CPU: dual Xeon E5506 (4 core, 4 thread) processors (Nehalem-EP)
RAM: 32GB
Disk: 4 x 1TB 7200RPM SAS
NIC: dual 1GbE
RAID controller: LSI MegaRAID

Considerations/Questions:
1. I plan to continue using ESXi 5.1 as my hypervisor so I can run multiple servers. Would anyone suggest Hyper-V or Xen instead?
2. On my first attempt, I built one logical RAID 5 volume and placed everything in it. I chose RAID 5 to get some redundancy and the most space (as opposed to RAID 1 or 6). Would I get better performance if I skipped the RAID controller, presented my physical HDDs to ESXi, and let FreeNAS do RAIDZ?
3. I allocated 8GB of RAM to FreeNAS the first time, and I thought this was plenty. Since I'm the only user of this NAS, I think this should be fine, but I'm open to suggestions. Could this be limiting my speed to 10MB/s?
4. I created my FreeNAS VM using VM hardware version 8. I realize this was wrong and will use VM hardware version 7 next time. I provisioned 2 CPUs and 2 cores.
5. I installed FreeNAS onto a 10GB partition and created a 2TB partition for my shared volumes (under ESXi). These partitions are thin provisioned. Should I be doing something else?
6. Does FreeNAS support LACP, and if so, is anyone using it? I saw someone reporting that they were hitting the theoretical limit of 1GbE (iSCSI share). Perhaps a 2GbE bundle would help. I have a 28-port GbE switch that supports LACP, but I'm not getting remotely close to GbE. I would love to get there, though (rough lagg sketch after this list).
7. My shares are all CIFS (for compatibility reasons), but I could create an iSCSI share if it makes sense.
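
For reference on question 6: my understanding is that FreeNAS does support LACP via its Link Aggregation settings, and that underneath it just builds a FreeBSD lagg interface, roughly like this (the interface names and address below are placeholders, not my actual config):

    # create a lagg interface and bind two physical NICs to it using LACP
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
    ifconfig lagg0 inet 192.168.1.50 netmask 255.255.255.0

From what I've read, a single CIFS or iSCSI session still hashes onto one physical link, so LACP mostly helps with multiple clients rather than raising the ceiling for a single transfer, but I'd be happy to have that confirmed.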

How much is ESXi affecting my performance? I'm sure there is some overhead, but 10x slower is ridiculous.
What are others who are running FreeNAS under ESXi seeing in terms of performance? Did I just build it wrong?

All this time I was a happy camper until I started reading these performance posts today. Now I want more :smile:. LOL.

Thanks in advance.
 

vibe666

Dabbler
Joined
Oct 28, 2011
Messages
10
I have a similar setup to yours, the only difference being that I used JBOD on my 8-port controller and added the disks as RDMs to my FreeNAS VM in ESXi (5.0).

With this configuration (2 CPUs and 8GB assigned to the VM) I get around 30MB/s, but that could well be the maximum speed of the disk on the nettop that I'm writing to (everything else is wireless or a VM).

I'm actually upgrading the physical host from an HP ML110 G6 (G6850 CPU and 8GB RAM) with 8x 2TB Samsung Spinpoints to an ML150 G6 (dual Xeon X5570 and 16GB RAM) with the same physical disks, plus a 250GB Samsung 840 SSD for either an L2ARC in FreeNAS or a high-speed datastore for my VMs in ESXi (I still haven't decided), but I might end up getting a second one so I can do both.

If you have the space, I'd flatten the disks, try presenting them to FreeNAS in ESXi as RDMs, use ZFS, and see what you get. You have plenty of RAM and CPU grunt, so FreeNAS should be able to handle the disks itself without taxing the (virtual) hardware.
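
If it helps, going from memory the RDM mapping files are created from the ESXi shell with vmkfstools, something like the below (the disk identifier and datastore path are placeholders, so check yours with a listing of /vmfs/devices/disks/ first; -r makes a virtual-mode RDM, -z a physical-mode one):

    # list the physical disks ESXi can see
    ls -l /vmfs/devices/disks/

    # create an RDM pointer file for one disk inside a datastore folder,
    # then attach the resulting .vmdk to the FreeNAS VM as an existing disk
    vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK__ID /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk

Repeat that for each disk you want FreeNAS to manage.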
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How much is ESXi affecting my performance? I'm sure there is some overhead, but 10x slower is ridiculous.
What are others who are running FreeNAS under ESXi seeing in terms of performance? Did I just build it wrong?

Is it ridiculous? Yes. Is it well understood that ESXi kills performance? Yep. If you aren't using PCI passthrough, expect performance to crater. RDM is not a fast system. If you read up on ZFS, it says not to use it in virtualized environments for a bunch of reasons, including performance and reliability. Quite a few people have had poor performance, and some have lost all of their data, because they tried to use ESXi. Don't expect complex reasons and explanations from the forum, because those "in the know" gave up and don't bother answering people's complaints about ESXi anymore. The manual says you shouldn't virtualize ZFS except for experimenting, ZFS wikis tell you it wants direct disk access, and it's mentioned on the forum a bunch of times that virtualizing your server isn't the smartest thing to do. What more is there to say? An explanation won't help you get the performance back.
 

reflection

Cadet
Joined
Feb 12, 2013
Messages
6
I have a similar setup to yours, the only difference being that I used JBOD on my 8-port controller and added the disks as RDMs to my FreeNAS VM in ESXi (5.0).

With this configuration (2 CPUs and 8GB assigned to the VM) I get around 30MB/s, but that could well be the maximum speed of the disk on the nettop that I'm writing to (everything else is wireless or a VM).

I'm actually upgrading the physical host from an HP ML110 G6 (G6850 CPU and 8GB RAM) with 8x 2TB Samsung Spinpoints to an ML150 G6 (dual Xeon X5570 and 16GB RAM) with the same physical disks, plus a 250GB Samsung 840 SSD for either an L2ARC in FreeNAS or a high-speed datastore for my VMs in ESXi (I still haven't decided), but I might end up getting a second one so I can do both.

If you have the space, I'd flatten the disks, try presenting them to FreeNAS in ESXi as RDMs, use ZFS, and see what you get. You have plenty of RAM and CPU grunt, so FreeNAS should be able to handle the disks itself without taxing the (virtual) hardware.

Thanks for your reply.

I just did a FreeNAS install at work, also in an ESXi (5.0) environment. This box only had one CPU (E5-2***). In my VM, I allocated 4GB RAM and 1 vCPU, and I set it up with VM hardware version 7. Amazingly, I was getting 60MB/s on my CIFS share :).

I'm going to try downgrading my version 8 VM hardware to version 7 when I get home. In the vSphere client, it actually says to use "7" for sharing.

I'll try doing the RDM if downgrading my VM hardware doesn't work.
 

reflection

Cadet
Joined
Feb 12, 2013
Messages
6
Is it ridiculous? Yes. Is it well understood that ESXi kills performance? Yep. If you aren't using PCI passthrough, expect performance to crater. RDM is not a fast system. If you read up on ZFS, it says not to use it in virtualized environments for a bunch of reasons, including performance and reliability. Quite a few people have had poor performance, and some have lost all of their data, because they tried to use ESXi. Don't expect complex reasons and explanations from the forum, because those "in the know" gave up and don't bother answering people's complaints about ESXi anymore. The manual says you shouldn't virtualize ZFS except for experimenting, ZFS wikis tell you it wants direct disk access, and it's mentioned on the forum a bunch of times that virtualizing your server isn't the smartest thing to do. What more is there to say? An explanation won't help you get the performance back.

Thanks for your reply.

It's a trade-off. There are many benefits to a virtualized environment, and I want to stick with one because of those benefits. I would like to achieve the best performance I can with FreeNAS in a VM environment. If that's going to be 20-30MB/s, I'm okay with that. I just want to better understand what the appropriate tweaks are.

I'm not willing to give up a VM environment (savings in power, space, cost, etc.) to achieve 70-100MB/s.
 

vibe666

Dabbler
Joined
Oct 28, 2011
Messages
10
Thanks for your reply.

It's a trade-off. There are many benefits to a virtualized environment, and I want to stick with one because of those benefits. I would like to achieve the best performance I can with FreeNAS in a VM environment. If that's going to be 20-30MB/s, I'm okay with that. I just want to better understand what the appropriate tweaks are.

I'm not willing to give up a VM environment (savings in power, space, cost, etc.) to achieve 70-100MB/s.
Ditto to this.

If FreeNAS isn't going to work out in the long run (although it's been going fine for the best part of 2 years or more), then I'll switch to something else that WILL work better in a virtualised environment, but I'd rather not have to if I don't need to, and so far, I don't.

I'd rather have a slow(ish) FreeNAS VM and still be able to leverage the unused physical hardware resources for other VMs than have a fast physical FreeNAS box.

Having said that, if there's a way to boost the performance of the VM then I'm all for that too. :)
 

reflection

Cadet
Joined
Feb 12, 2013
Messages
6
Looks like my LSI MegaRAID controller does not support JBOD mode. Oh well.

I did some tests - right now I'm getting about 18MB/s write and 75MB/s read performance. Seems like I'll stick with this for now since it's enough to saturate my wireless bandwidth (on wireless, I get about 13MB/s).
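
(For anyone who wants to compare numbers: a rough way to separate pool speed from network speed is a local dd from the FreeNAS console versus a large file copy over CIFS. The dataset path and sizes below are only examples, and a /dev/zero test assumes compression is off on the dataset.)

    # local write test on the pool, bypassing the network
    dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=16384

    # local read test of the same file (use a file larger than RAM,
    # otherwise the read mostly comes back from the ARC cache)
    dd if=/mnt/tank/testfile of=/dev/null bs=1m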
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm just gonna put this out there.. do what you will with it.

1. RDM with FreeNAS (in particular ZFS) is NOT recommended. You can experience corruption and performance issues that cannot be fixed.
2. Using ZFS on hardware RAID is also NOT recommended, because ZFS loses its self-healing properties. If you get any corruption at all in the file system, it will be unrepairable. Since there is no fsck/chkdsk for ZFS, you will basically have a corrupted file system that cannot be repaired. That's a bad place to be.

This is one of those situations where what you have is a working configuration, but it can quickly and easily become unworkable, and you'll learn the hard way what you did wrong. The situation you are in is very similar to people who make a RAIDZ2 and then add several disks individually, which leaves no redundancy for the new disks. Then, when a disk fails, the admin is shocked that all his data is lost when he thought he had redundancy. Usually he also adds that he has no backups and his entire family album is on there, etc. This has happened far too many times, hence I wrote the noobies guide for FreeNAS to hopefully help people not do stupid things.
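
To spell out that failure mode with hypothetical pool and disk names:

    # a six-disk RAIDZ2 pool: any two disks can fail without data loss
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # "growing" the pool by adding single disks stripes them next to the RAIDZ2 vdev;
    # zpool warns about the mismatched replication level and -f overrides the warning
    zpool add -f tank da6 da7

    # status now shows da6 and da7 as top-level devices with no redundancy:
    # lose either one and the entire pool is gone
    zpool status tank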

Just don't be surprised if one day something happens and you're left scratching your head.
 

reflection

Cadet
Joined
Feb 12, 2013
Messages
6
I'm just gonna put this out there.. do what you will with it.

1. RDM with FreeNAS (in particular ZFS) is NOT recommended. You can experience corruption and performance issues that cannot be fixed.
2. Using ZFS on hardware RAID is also NOT recommended, because ZFS loses its self-healing properties. If you get any corruption at all in the file system, it will be unrepairable. Since there is no fsck/chkdsk for ZFS, you will basically have a corrupted file system that cannot be repaired. That's a bad place to be.

1. So you're recommending not doing RDM (which I'm not doing).
2. And you're recommending not using ZFS with hardware RAID. So if I have hardware RAID (RAID 5 in my case), should I use UFS?

Not using my RAID controller is not an option. I have no way to bypass the controller. I could do a bunch of RAID 0 volumes (one per physical disk) and let ZFS handle it...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
1. So you're recommending not doing RDM (which I'm not doing).
2. And you're recommending not using ZFS with hardware RAID. So if I have hardware RAID (RAID 5 in my case), should I use UFS?

Not using my RAID controller is not an option. I have no way to bypass the controller. I could do a bunch of RAID 0 volumes (one per physical disk) and let ZFS handle it...

Yes and yes. If you read the manual, it explains that ZFS shouldn't be used on hardware RAID. ZFS is a logical disk manager, and it can't do its job properly if something else is also acting as a disk manager (i.e., your RAID controller). For the same reason, you shouldn't do RDM with ESXi (ESXi does its own logical disk management), which can backfire badly. Basically, if you can't do PCI passthrough of your RAID controller in ESXi, you shouldn't be doing ZFS. Of course, some people do this and claim it works fine, and it will; in theory it should work fine. The problem is that if anything goes wrong, there is pretty much no way to fix it.

I can't give advice on UFS because I've never used it. I switched to FreeNAS solely for the awesomeness that is ZFS. But UFS is supposed to be very reliable and trustworthy.
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
2. And you're recommending not using ZFS with hardware RAID. So if I have hardware RAID (RAID 5 in my case), should I use UFS?
Not using my RAID controller is not an option. I have no way to bypass the controller. I could do a bunch of RAID 0 volumes (one per physical disk) and let ZFS handle it...

There are several ways to achieve what you need:
1) Get another controller which supports JBOD ... IMO the best approach.
2) Look around (Google, etc.) for a hacked firmware for your LSI that enables IT mode ==> JBOD support. Be careful with this, though; you could brick your controller!
3) Configure each disk as a single-disk RAID 0 (rough CLI sketch below). Just note that when hot-swapping/replacing disks, you will need to delete the old RAID 0 volume and create a new one for the replacement disk.
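
For option 3, on a MegaRAID controller the per-disk RAID 0 volumes can usually be created from the CLI, roughly like this (the enclosure:slot numbers are examples; list yours first):

    # list physical disks with their Enclosure/Slot IDs
    MegaCli -PDList -aALL

    # create one single-disk RAID 0 logical drive per physical disk on adapter 0
    MegaCli -CfgLdAdd -r0 [252:0] -a0
    MegaCli -CfgLdAdd -r0 [252:1] -a0
    MegaCli -CfgLdAdd -r0 [252:2] -a0
    MegaCli -CfgLdAdd -r0 [252:3] -a0

Keep in mind ZFS still sits behind the controller's own logic this way (and usually cannot see SMART data), so treat it as a workaround, not a substitute for real JBOD/IT mode.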
 