I've been testing ZFS on several platforms (FreeNAS, Nexenta 3.5.1, Nexenta 4.0.2, and NAS4Free) and have gotten mixed results on all of them.
The best results so far have come from Nexenta 3.5.1.
How can I bring my FreeNAS performance up to that level?
My system is a dual Intel L5520 setup with 70 GB RAM and a 10G Intel NIC. FreeNAS is installed on a RAID1 of 2x Intel 520 SSDs on the Areca controller. The ZFS pool drives are 14x 2TB Hitachi SATA disks and 8x Intel 520 SSDs, all in pass-through mode on the Areca card.
My benchmark procedure is to mount the NAS from an ESXi 5.5 server, load a Windows 2008 R2 64-bit VM, and run Anvil and ATTO benchmarks. Then I clone 6 VMware I/O Analyzer VMs and run their Exchange 2007 instances together. For those unfamiliar with VMware I/O Analyzer, the Exchange 2007 test loads IOmeter and runs an 8k, 55% read, 80% random workload.
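For anyone who wants to approximate that workload without deploying the analyzer appliances, a fio job file along these lines should be roughly equivalent. This is my own substitution for illustration (the real test runs IOmeter inside the appliance, and its exact queue depth and runtime aren't published here, so those values are assumptions):

```ini
; exchange2007-like.fio -- rough approximation of the I/O Analyzer
; "Exchange 2007" pattern, not the exact IOmeter access spec
[exchange2007-like]
rw=randrw            ; mixed reads and writes
rwmixread=55         ; 55% reads / 45% writes
percentage_random=80 ; 80% random, 20% sequential access
bs=8k                ; 8 KiB I/O size
iodepth=32           ; assumed queue depth; original is unspecified
direct=1             ; bypass the guest page cache
time_based=1
runtime=120          ; assumed duration
size=4g              ; assumed working-set size per job
```

Run it with `fio exchange2007-like.fio` inside a test VM on the datastore; the aggregate IOPS it reports should land in the same ballpark as one I/O Analyzer instance.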
For some reason, FreeNAS, Nexenta 4.0.2, and NAS4Free all perform poorly compared to Nexenta 3.5.1.
Nexenta 3.5.1 gets a combined 59k IOPS running the 6x VMware I/O Analyzer load.
Nexenta 4.0.2, NAS4Free, and FreeNAS range from 10k to 30k, which is nowhere close to 59k.
The same test running on an LSI 9271 with CacheCade (4x Intel DC S3500 as the CacheCade tier, 16x 2TB SAS drives in RAID10) gets about 57k IOPS on the same 6x VMware I/O Analyzer workload.
Why is Nexenta 3.5.1 producing such good results compared to the other ZFS platforms? And why is Nexenta 4.0.2 not doing better when it claims 4x more performance? In my testing it's more like 3x less.
I don't want to stick with Nexenta because of the capacity limit on the free version. If I can get good results on FreeNAS, I have more RAM lying around and can probably upgrade to 128 GB. I also have a 45-bay JBOD that I'd like to attach, with more 2TB drives and more SSDs.
I've tried a number of different RAID and mirror layouts for the drives. So far, every tweak has had little or even negative impact. These include adjusting a few tunables such as the ARC memory size, changing the pool configuration, and trying whatever else I can find on this forum.
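For context, the ARC tuning I've been experimenting with looks roughly like this. On FreeNAS these are entered under System > Tunables, which writes them to /boot/loader.conf; the specific byte values below are illustrative guesses for a 70 GB box, not recommendations:

```shell
# /boot/loader.conf -- ARC tunables (illustrative values only)
# Cap the ARC so it doesn't consume nearly all RAM (here ~56 GiB of 70 GB)
vfs.zfs.arc_max="60129542144"
# Keep a floor under the ARC so memory pressure can't shrink it too far (~16 GiB)
vfs.zfs.arc_min="17179869184"
```

A reboot is needed for loader tunables to take effect; the current ARC size can then be checked with `sysctl kstat.zfs.misc.arcstats.size`.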
How can I reach similar results, or even surpass 59k?
I have all these spare Areca RAID cards, SATA drives, and SSDs because we added Nimble Storage along with the LSI CacheCade system mentioned above. We're planning to use these systems for backup storage, but I want to see if I can get an extra performance boost by running ZFS on them.
After all this testing and the poor results, maybe it's better to just stick with the original configuration on the Areca RAID controller and run it as hardware RAID.