Running ZFS with <8GB RAM on 9.2 Does Work Well

Status
Not open for further replies.

wiebsomean

Cadet
Joined
Dec 22, 2013
Messages
2
There are a lot of forum threads about poorer performance running FreeNAS 9.2 versus 8.x, particularly when running ZFS with <8GB RAM. Certainly I experienced woeful performance running 9.2 as a ‘stock install’ on 4GB RAM, but my real-world experience suggests FreeNAS 9.2 can be nearly as fast as 8.x with a little prudent tuning, and perhaps a cheap hardware upgrade.

I thought I should share my experience in the hope that it will help others live a faster life with an under-specced rig and reap all the benefits of the latest build of ZFS without forking out a lot of cash.

Before applying the settings below, I was getting only 11-15MB/s read and write performance from my rig over GigE with just the standard autotune sysctls and tunables set. Afterwards, I saw a reliable 65-70MB/s throughput and rock-steady stability.

Firstly, let me affirm the oft-quoted statement that FreeNAS 9.2 needs a minimum of 8GB RAM (16GB recommended), with the minimum spec clearly set out in the official documentation here: http://doc.freenas.org/index.php/Hardware_Recommendations

My post isn't a recommendation to run ZFS with less than the minimum recommended RAM, merely a recipe explaining how it's possible should you so wish. There are undoubtedly stability issues which some users will experience if they choose to implement this recipe, but at least performance should be acceptable for a small home-LAN type of scenario.

So here goes.

My Rig:
  • Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
  • Gigabyte Mobo supporting max 4GB DDR3 RAM & Broadcom chipped Gigabit Ethernet on-board
  • 4074MB RAM
  • Super cheap Intel PCI GigE NIC
  • 2 x 2TB + 2 x 1TB HDD drives in a single RAIDZ1 vdev
I'm running NFS and CIFS (the latter to my Sonos media streamer), however I've only tested performance via NFS to my Macs.

Step 1: Trade up to an Intel-branded NIC. This single change improved read performance by some 50MB/s without any other config changes at all. Mine cost about £25 about 6 years ago, and all I had to do was rip it out of a rusting hulk in the attic, insert it and reboot, then configure FreeNAS to route all data through the PCI NIC. Plain PCI (not PCIe) has enough bandwidth to support Gigabit-speed data transfer, particularly if you only have one PCI card in your machine. PCIe will obviously run a GigE NIC too, but that might be overkill if you're low on slots.

Modern Z87 and Z77 mobos have Intel-chipped on-board NICs, and I'm sure lots of older ones do too, so this step may not be necessary in your case. But if you have such a modern mobo, it probably also supports a lot of RAM, so upgrade that instead.

Step 2: Modify some auto-tune sysctl settings as follows:

Code:
kern.ipc.maxsockbuf = 4194304
net.inet.tcp.recvbuf_max = 16777216
net.inet.tcp.sendbuf_max = 16777216
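
If you want to experiment before committing anything, the same sysctls can be applied non-persistently from a root shell on the FreeNAS console with sysctl(8). A sketch; these values revert on reboot, so use the GUI to make them stick:

```shell
# Apply temporarily from a root shell; lost at next reboot
sysctl kern.ipc.maxsockbuf=4194304
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.sendbuf_max=16777216

# Confirm the live values
sysctl kern.ipc.maxsockbuf net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max
```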


Step 3: Add the following sysctl settings:

Code:
net.inet.tcp.delayed_ack = 1


There’s some debate about this one, but my iostat output clearly indicated a write improvement of about 5MB/s after I set this sysctl.

Step 4: Add the following loader.conf tunable settings:

Code:
vfs.zfs.prefetch_disable = 0
vfs.zfs.txg.timeout = 5
vfs.zfs.zio.use_uma = 1
vfs.zfs.write_limit_override = 268435456


This last one (268435456 bytes, i.e. 256MB) improved write performance in line with the move to the Intel NIC, i.e. I gained another 50MB/s.
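
For reference, tunables saved in the GUI end up written into /boot/loader.conf. The result looks roughly like this (a sketch; the exact file contents depend on what autotune has already added to your system):

```shell
# /boot/loader.conf (fragment) - managed by the FreeNAS Tunables GUI
vfs.zfs.prefetch_disable="0"              # leave ZFS prefetch enabled
vfs.zfs.txg.timeout="5"                   # seconds between transaction group commits
vfs.zfs.zio.use_uma="1"                   # use the UMA allocator for ZIO buffers
vfs.zfs.write_limit_override="268435456"  # cap per-txg write buffering at 256MB
```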

Step 4.5: Don’t adjust any kmem_size tunables created by autotune. These are core to how FreeBSD manages kernel memory, and adjusting them will likely make your system unstable.

Step 5: Reboot - you need to reboot for any tunable to take effect; sysctls, by contrast, apply immediately after clicking OK.

Step 6: Do some performance testing - it's important to measure what improvement you've achieved so that you feel validated in sticking with FreeNAS 9.2. The best way to do this is to open the Shell and run the command below whilst copying some files to and from your FreeNAS box from a decently specced PC.

Code:
zpool iostat PrimaryData 1


My zpool is named PrimaryData; change this value to whatever you’ve named your pool.
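
If you don't have a client handy, a rough sequential-write figure can also be had locally with dd. This is only a sketch: point TARGET at your own pool's mount point (e.g. /mnt/PrimaryData, which is hypothetical here), and note that all-zero input compresses extremely well, so results on a compressed dataset will look optimistic.

```shell
# Point TARGET at your pool's mount point to measure real pool throughput;
# it falls back to a temp directory so the commands run anywhere.
TARGET="${TARGET:-$(mktemp -d)}"

# Write 64 MiB of zeros in 1 MiB blocks; dd reports elapsed time and rate.
dd if=/dev/zero of="$TARGET/ddtest" bs=1048576 count=64

# Check the file landed, then clean up.
ls -l "$TARGET/ddtest"
rm -f "$TARGET/ddtest"
```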

Footnote:

My hard drive arrangement is, I admit, a bit odd, and definitely not recommended. It arose out of a few HDD failures, where I decided to replace the failed drives with larger ones. Over time, I'll replace the older 1TB drives with matched 2TB HDDs, ending up with more available disk space and hopefully better performance overall. The beauty of ZFS is that resilvering is a doddle.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I think I should include this disclaimer: These settings are for 9.2.0. Any update to FreeNAS could change some other default tunables/sysctls resulting in different performance or stability problems. Use at your own risk.

Good writeup. Not sure why you included vfs.zfs.txg.timeout=5, as that's the default anyway, but ok. That write limit value will change depending on your disks and SATA controller, so again it's not something people can just blindly copy and expect good results from (hey, they don't call it the ZFS evil tuning guide for nothing!).

Definitely interesting to see how people try to make do with less hardware. But the fact of the matter is that I don't really care too much about performance; I'm far more concerned with the whole "kernel panic due to insufficient RAM, you reboot and your pool is forever trashed/gone" scenario. We just had one (or two, I forget) of those 2 days ago. The bigger concern for me is that you can have no hardware failure at all and still lose a pool while having full redundancy the entire time. Not a good position to be in, nor is it a condition people think is possible. Which is why it's so hard to explain to people when they ask why their 3GB of RAM was good enough for so long, and then suddenly they wake up and their worst nightmare is happening to them.

Yeah, that whole "reliability" comment is a farce IMO. 9.2.0 has only been out for 2 weeks, so that's far from enough (in my opinion) to make the claim it's reliable. The problem is that:

1. We don't know what exactly is going wrong. ZFS is supposed to discard incomplete transactions, making this whole failure mode impossible. That alone makes proving something "safe" damn near impossible. Even the 8GB RAM limit was set based on crowdsourced feedback from forum users over two years!
2. We've had people that have daily kernel panics from insufficient RAM, they hit the reset button and have no ill consequences.
3. We've had people that had a single kernel panic from insufficient RAM and their pool never worked again.
4. We've had combinations of #2 and #3, some lasting a day or two, others go a year or more before losing data.

The only thing I hate about threads like these is that they make it sound like it's acceptable to ignore the minimum requirements, which people around here do hourly like it's nothing. The fastest way to get me to ignore you and your threads forever is to have less than the minimum requirements. If you can't be "inconvenienced" with meeting the system requirements (which I updated), then why would you want to listen to me now? You didn't listen to me before. Why would I waste my time on you a second time?

Anyway, I won't delete it despite really wanting to. I have no doubt it'll turn into a big, long discussion that will get heated, and some admin will ultimately lock the thread because it'll become a place where all the manual-ignoring, requirements-avoiding people meet to discuss how to "stick it to the man". It'll just be a rallying cry for those that refuse to meet the recommended minimums. And nothing good can ultimately come from that.

It's a very thin line between choosing to ignore something and flat-out endorsing it. We had some discussion with one of the people who lost their data 2 days ago because we haven't hard-coded FreeNAS to refuse to boot without 8GB of RAM. His argument was that he had no idea what he was doing was dangerous, and he had been using the system that way for more than a year if I remember correctly. He had a single panic after more than a year of operation, and that was all it took to cost him his data.
 

wiebsomean

Cadet
Joined
Dec 22, 2013
Messages
2
CyberJock, your warnings are well heard indeed, and it saddens me that you have to repeat them so often in so many threads.

I myself am comfortable running the risk of kernel panics, as I have a reliable cloud backup solution which, I've tested several times, does indeed restore my data when required. It's also less than a tenner a month.

Entrusting any data to a single storage device is never wise, so I echo your warning and implore anyone who follows the recipe to ensure they have an adequate backup solution that has actually been tested. Cloud is a convenient way to ensure that offsite data protection is in place - my wife would simply kill me if the photos and home videos disappeared forever.

- wiebsomean
 