FreeNAS, Nexenta, Nas4free comparison. Help tune FreeNAS


ta ma

Cadet
Joined
Jul 10, 2014
Messages
2
I've been testing ZFS on different platforms (FreeNAS, Nexenta 3.5.1, Nexenta 4.0.2, and NAS4Free) and have gotten mixed results on all of them.
The best results so far were from Nexenta 3.5.1.

How can I get my FreeNAS performance up?

My system is a dual Intel Xeon L5520 with 70 GB of RAM and a 10G Intel NIC. FreeNAS is installed on a RAID1 of 2x Intel 520 SSDs on the Areca controller. The ZFS drives are 14x 2TB Hitachi SATA disks and 8x Intel 520 SSDs in pass-through mode on the Areca card.

My benchmark procedure is to mount the NAS from an ESXi 5.5 server, load up a Windows 2008 R2 64-bit VM, and run Anvil and ATTO benchmarks. Then I clone 6 VMware I/O Analyzer VMs and run the Exchange 2007 instances together. For those not familiar with VMware I/O Analyzer, the Exchange 2007 test loads Iometer and runs an 8K workload at 55% read, 80% random.
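For reference, here is a rough Python sketch of what that access pattern amounts to. It is my own approximation of the Iometer profile, not the actual I/O Analyzer workload, and the target file, span, and op count are made-up placeholders (a real test would hit the iSCSI-backed datastore with direct, unbuffered I/O, which this sketch does not do).

```python
import os
import random

# Rough approximation of the I/O Analyzer "Exchange 2007" profile:
# 8 KiB transfers, 55% reads / 45% writes, 80% random / 20% sequential.
# TARGET, SPAN, and OPS are placeholders, not values from this thread.
BLOCK = 8 * 1024
SPAN = 256 * 1024 * 1024
OPS = 10_000
TARGET = "/tmp/io_profile_test.bin"

def run_profile() -> None:
    if not os.path.exists(TARGET):
        with open(TARGET, "wb") as f:
            f.truncate(SPAN)                 # sparse placeholder file
    seq_offset = 0
    buf = os.urandom(BLOCK)
    with open(TARGET, "r+b") as f:
        for _ in range(OPS):
            if random.random() < 0.80:       # 80% random offsets
                offset = random.randrange(SPAN // BLOCK) * BLOCK
            else:                            # 20% sequential
                offset, seq_offset = seq_offset, (seq_offset + BLOCK) % SPAN
            f.seek(offset)
            if random.random() < 0.55:       # 55% reads
                f.read(BLOCK)
            else:                            # 45% writes
                f.write(buf)

if __name__ == "__main__":
    run_profile()
```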

FreeNAS, Nexenta 4.0.2, and NAS4Free all perform poorly compared to Nexenta 3.5.1 for some reason.
Nexenta 3.5.1 gets a combined 59k IOPS running the 6x VMware I/O Analyzer load.
Nexenta 4.0.2, NAS4Free, and FreeNAS range from 10k to 30k, which is nowhere close to 59k.

The same test on an LSI 9271 with CacheCade (4x Intel DC S3500 as the cache tier) and 16x 2TB SAS drives in RAID10 also gets about 57k IOPS on the 6x VMware I/O Analyzer load.

Why is Nexenta 3.5.1 producing good results compared to the other ZFS platforms? And why is Nexenta 4.0.2 not producing better results when it claims 4x more performance? In my tests it's more like 3x less.

I don't want to stick with Nexenta because of the size limitation on the free version. If I can get good results on FreeNAS, I have more RAM lying around and can probably upgrade to 128 GB. I also have a 45-bay JBOD with more 2TB drives and more SSDs that I would like to attach.

I've tried a number of different RAID and mirror layouts for the drives. My attempts to tweak so far have had little or even negative impact: adjusting a few tunables such as the ARC memory size, changing the RAID configuration, and trying whatever I can find in this forum.
How can I get similar results, or even surpass 59k?

I have all these extra Areca RAID cards, SATA drives, and SSDs because we added Nimble Storage and also added the LSI CacheCade system I mentioned above. We're planning to use these systems for backup storage, but I want to see if I can get an extra boost in performance by adding ZFS.
After all this testing with poor results, maybe it's just better to stick with the original configuration on the Areca RAID controller and run it as hardware RAID.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, ditch the Areca card. The manual says no RAID card, and we are serious. ZFS + hardware RAID means not only poor performance, but all that reliability you *think* you are getting doesn't exist because you put a hardware RAID controller in the middle.

Second, virtually nobody here uses the other OSes you mention, so we can't provide a comparison. But I will tell you that more than 90% of the time when someone comes in here and says "FreeNAS is slower than X," they haven't bothered to look under the hood and figure out things like "X doesn't use ZFS" or other differences that should be blatantly obvious and matter a whole lot.

Other than that, all I can say is that the defaults are what you *should* be sticking with unless you are a FreeNAS pro. 99% of the time when people start playing with tunables, sysctls, or custom settings, they are doing very stupid things and don't understand what they are doing. The defaults work fine 99% of the time. If something else were better 99% of the time, it would be the default. The fact that it's not should be your sign.

Other than that, I can't say much else. ZFS's caching subsystem can basically lie to benchmarks, so unless you are a pro at benchmarking ZFS, you probably aren't doing a comparison that actually "means" something. People regularly post bogus benchmarks showing a single disk doing 4GB/sec+, which we all know isn't even close to possible, since that's several times faster than SATA3 can even move data. ;)
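As a rough sanity check on that kind of number (the figures below are assumptions for illustration, not measurements from this thread):

```python
# Back-of-the-envelope check: a single SATA3 disk cannot sustain "4 GB/sec+".
# SATA3 is a 6 Gbit/s link; after 8b/10b encoding roughly 600 MB/s is usable.
sata3_link_gbit = 6.0
usable_mb_per_s = sata3_link_gbit * 1000 / 10     # bits -> bytes, including encoding overhead
claimed_mb_per_s = 4000                           # the "4 GB/sec+" benchmark claim
ratio = claimed_mb_per_s / usable_mb_per_s
print(f"Claimed speed is ~{ratio:.1f}x the SATA3 ceiling of {usable_mb_per_s:.0f} MB/s; "
      f"that data is coming out of the ARC (RAM), not the disk.")
```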
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
First, about the test ("the Exchange 2007 test loads Iometer and runs an 8K workload at 55% read, 80% random"): consider that ZFS, as a copy-on-write filesystem, has to rewrite the full block even if only part of it was modified. It is even worse if the block is not in cache, because it then has to be read before it is written. So unless you have already done that, consider tuning the dataset recordsize to match the client filesystem block and/or database record size.
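To illustrate the point, here is a back-of-the-envelope sketch of that read-modify-write amplification. The cache-hit rate and the recordsizes compared are assumptions for illustration, not measurements:

```python
# Illustrative write amplification for small random writes on a copy-on-write
# filesystem: an 8 KiB write dirties a whole ZFS record, and if the record is
# larger than the write and not already in ARC, it must be read in first.
APP_WRITE_KIB = 8
CACHE_HIT = 0.3                                     # assumed fraction of records already cached

for recordsize_kib in (128, 64, 16, 8):
    needs_rmw = recordsize_kib > APP_WRITE_KIB      # a full-record overwrite skips the read
    extra_read = (1 - CACHE_HIT) * recordsize_kib if needs_rmw else 0
    written = recordsize_kib                        # the whole record is rewritten
    factor = (extra_read + written) / APP_WRITE_KIB
    print(f"recordsize={recordsize_kib:3d}K: ~{factor:4.1f}x I/O per {APP_WRITE_KIB} KiB write")
```

With an 8K recordsize the backend I/O roughly matches the application I/O; with the 128K default, each small write can turn into an order of magnitude or more extra disk traffic.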

As for FreeNAS iSCSI: 9.2.1.6 includes a new experimental kernel iSCSI implementation, which you can try. In my tests with a high-IOPS workload it roughly doubles the old one (200K IOPS instead of 100K) on many-CPU systems. Also, I just committed a set of patches into the branch for the upcoming FreeNAS 9.3 that pushes that number in my tests up to 600K IOPS when several LUNs are accessed simultaneously.

You haven't specified the FreeNAS version you are running. If you are using iSCSI LUNs backed by ZVOLs, make sure you are running FreeNAS 9.2.1.6, since it was optimized for such high IOPS on ZVOLs.

If you have extra HBA cards, you may use several of them to spread the load. Even if you are not saturating the interface bandwidth, a single card can become an IOPS bottleneck. The present FreeNAS, based on FreeBSD 9.x, is not yet able to take full advantage of that, but with current FreeBSD 10.x I was able to push almost a million IOPS through four LSI HBAs to an array of SSDs. If you are using SAS expander(s), make sure they can sustain that I/O rate; otherwise just use direct wiring, at least for the SSDs. And definitely do not cascade the expanders.
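As a rough way to size this (all the numbers below are assumptions for illustration; per-SSD and per-HBA figures vary a lot by model, driver, and OS version, and the ~250K/HBA guess is simply derived from the four-HBA figure above):

```python
import math

# Rough sizing sketch: will one HBA be the IOPS bottleneck for the SSD pool?
# All figures are assumptions for illustration, not specs from this thread.
NUM_SSDS = 8
IOPS_PER_SSD = 50_000            # assumed small-block random IOPS per SSD
IOPS_PER_HBA = 250_000           # assumed per-controller ceiling (~1M / 4 HBAs)

pool_potential = NUM_SSDS * IOPS_PER_SSD
hbas_needed = math.ceil(pool_potential / IOPS_PER_HBA)
print(f"SSD pool could source ~{pool_potential:,} IOPS; spread it across at least "
      f"{hbas_needed} HBA(s) to keep the controller from being the bottleneck")
```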
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Nexenta 3 used OpenSolaris; v4 uses Illumos. They have different caching and buffer systems. Using ZFS on anything other than a small system means learning, testing, and starting over again ;) VMware I/O Analyzer is nice, but you have to tune the system according to your real workload.
 

ta ma

Cadet
Joined
Jul 10, 2014
Messages
2
@cyberjock, the RAID card is set to pass-through on the drives that belong to the ZFS volume.

@mav, I did try adding another HBA on another test system, but it doesn't seem to help much. I'm using 9.2.1.6, by the way. I tried the experimental kernel iSCSI target after you mentioned it, and it doesn't seem to make any difference.

Should I be expecting less performance than if the drives were running hardware RAID10? The results I'm getting seem below par. With SSD caching enabled, I thought I would be able to get somewhat better results with ZFS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@cyberjock, the RAID card is set to pass-through on the drives that belong to the ZFS volume.

Right, I got that. But on Areca cards that doesn't change enough to make it a card we recommend. I have 2 Areca cards, and I used one for almost 5 months. I also almost lost my pool because of Areca. So no, I still don't recommend it; it's still not recommended in our hardware recommendations sticky, and it likely never will be.
 