General Feedback and ZFS Optimization Advice Requested

Status
Not open for further replies.

Torsiatno

Cadet
Joined
Aug 6, 2017
Messages
1
Apologies for the length; I'm long-winded.

I started delving into FreeNAS and ZFS about a month ago, and in the meantime I've learned a fair amount about both, I think. I'm hoping to get some general feedback about what I have set up, and would appreciate input on how I might optimize my system for better ARC performance, plus any other suggestions you may have.

First, the build:

I've got an old Sunfire x2200 M2 that I've turned into my FreeNAS box. The case, motherboard, PSUs, and such are stock, so I've left them off the list. I've added RAM and an HBA to set it up for FreeNAS, and the specs are as follows:

CPU: 2x AMD Opteron 2376 (quad-core @ 2.3GHz)
RAM: 128GB DDR2 ECC (thanks, eBay!)
Boot: 32GB USB
Disks: 4x 1TB SATA
HBA: LSI 9201-16e

Soon I'm going to be adding six or so 2TB SATA drives, but I'm getting my general infrastructure set up first before spending the money.

My intent for this FreeNAS server is two-fold. The first is to use it as an actual NAS/SAN for my house. I'm going to be doing backups from other systems in my house to the NAS, using it for bulk storage that has disk redundancy, and other things traditionally done for home use of FreeNAS.

I'm also going to be using it as part of my homelab. I have another server, a repurposed HP DL385 G6, running as an ESXi host, with 2x AMD Opteron 2427s (6-core, 2.2GHz) and 80GB of DDR2 ECC RAM. FreeNAS currently operates as the primary/only storage for that ESXi box, excluding the USB drive that ESXi boots off of. FreeNAS exposes an NFS share for the ESXi datastore.

The four 1TB drives are currently in a RAID 10 configuration (striped mirrors), with lz4 compression and dedup enabled on the dataset. I'm aware that dedup is not generally recommended, and I've read extensively about its requirements; I wanted to get an idea of how it would work in actual practice for my data. Additionally, I have the ability to migrate data off the dataset and disable dedup if I end up needing to. Right now I'm getting a dedup ratio of roughly 2x and have plenty of RAM available. I'm currently using ~140GB of the dataset's ~1.7TB capacity.
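These are the properties I'm watching to judge whether dedup is paying off (the pool name "main" is from my system; the dataset name is just illustrative):

```shell
# Pool-wide dedup ratio, plus per-dataset compression and logical-vs-physical usage.
zpool list -o name,size,alloc,dedupratio main
zfs get compressratio,used,logicalused main/storage
```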

I have sync disabled. Everything on the dataset is backed up offsite near-constantly, with versioning, and I also take consistent snapshots of the dataset. None of the data on there is so important that I can't afford to lose a few (or even several) minutes of it, and the performance hit from keeping the ZIL on the hard drives was pretty bad. I tried putting in a cheap 32GB SSD to see how it would fare as a SLOG: it made writes (VMs over NFS) significantly faster, but still only about 25% of what I get with sync disabled.
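For reference, this is roughly what the sync toggle and the SLOG experiment looked like (the dataset name "main/vmstore" and the device name "da8" are placeholders; yours will differ):

```shell
# Disable sync writes on the NFS-backed dataset (trades a few minutes of
# crash consistency for much better NFS write throughput):
zfs set sync=disabled main/vmstore
zfs get sync main/vmstore           # confirm the setting took

# The SLOG experiment: add the SSD as a log vdev, and remove it again
# when done (log vdevs can be removed from a pool safely):
zpool add main log da8
zpool remove main da8
```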

I know that FreeNAS generally isn't very CPU-demanding, so I'm pretty sure the hardware is up to the job. My limiting factors are my 1Gb network and the physical disks. Given the network bandwidth limitation, I chose to build this on a server with DDR2 memory, since I can get more of it for less money, figuring that having more RAM for my use case would far outweigh having faster RAM. Was that a correct conclusion?
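My back-of-the-envelope math for the bottleneck, for what it's worth:

```shell
# A single 1 Gb/s link tops out at 125 MB/s on paper (and ~110-115 MB/s
# real-world after protocol overhead), far below what even DDR2 memory
# or a small set of striped mirrors can stream.
echo "$(( 1000000000 / 8 / 1000000 )) MB/s"
```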

I've got FreeNAS configured to use three 1Gb ports in a LAGG (LACP), connecting to my central switch, a Cisco 2960G.
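On the FreeBSD side, the equivalent manual setup would look something like this (I actually did it through the FreeNAS GUI; the NIC names and address below are placeholders):

```shell
# Create an LACP lagg over three 1Gb ports and assign an address.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport bge0 laggport bge1 laggport bge2
ifconfig lagg0 inet 192.168.1.10/24
```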

I'm also looking for ways to better optimize my ARC. My hit ratio is fairly low, despite what I think is pretty consistent data access. Right now there are a handful of VMs, all of which have been running continuously with relatively little change for a few days. If I spin up a new VM that is decently different, I see the ARC climb, so I know there is more room, but my hit ratio never seems to get above 80% or so. I have plenty of RAM available: the cache is currently taking up about 80GB total, so I have more than 40GB to spare. I have verified that the ARC tunable is high enough; it's currently set to 120GB. I have autotune enabled, and I haven't changed anything by hand, other than bumping the ARC limit from autotune's ~115GB to 120GB, since I like rounder numbers.
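For reference, this is how I'm computing the hit ratio from the arcstats counters. The values below are made up just to show the arithmetic; the live numbers come from sysctl (kstat.zfs.misc.arcstats.hits and .misses on FreeBSD):

```shell
# Substitute hits=$(sysctl -n kstat.zfs.misc.arcstats.hits) and the matching
# misses counter on a live system; example values shown here.
hits=900000000
misses=150000000
echo "ARC hit ratio: $(( 100 * hits / (hits + misses) ))%"   # -> 85%
```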

What can I do to get a better hit ratio? I've read that >90% is desired, and that's what I was getting initially when I first started deploying VMs.

zdb -DD output of dataset:
Code:
root@freenas:~ # zdb -DD -e main
DDT-sha256-zap-duplicate: 254161 entries, size 488 on disk, 157 in core
DDT-sha256-zap-unique: 2175474 entries, size 575 on disk, 185 in core

DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    2.07M    263G    122G    122G    2.07M    263G    122G    122G
     2     156K   19.5G   10.5G   10.6G     405K   50.7G   27.2G   27.6G
     4    84.6K   10.6G   6.87G   6.87G     368K   45.9G   29.9G   29.9G
     8    4.63K    589M    387M    387M    47.0K   5.84G   3.86G   3.86G
    16    1.41K    179M    123M    123M    28.9K   3.57G   2.44G   2.44G
    32      802   99.1M   72.7M   72.8M    34.5K   4.27G   3.18G   3.18G
    64      358   43.6M   41.8M   41.8M    33.9K   4.14G   3.97G   3.97G
   128       86   9.23M   7.31M   7.32M    15.2K   1.62G   1.29G   1.29G
   256       60   6.78M   5.67M   5.68M    20.4K   2.34G   1.96G   1.96G
   512       19   2.29M   2.01M   2.01M    12.4K   1.51G   1.34G   1.34G
    1K       33   4.12M   4.12M   4.12M    45.5K   5.69G   5.69G   5.69G
    2K       16      2M      2M      2M    51.7K   6.46G   6.46G   6.46G
    8K       16      2M      2M      2M     202K   25.3G   25.3G   25.3G
   16K        8      1M      1M      1M     231K   28.9G   28.9G   28.9G
 Total    2.32M    294G    140G    140G    3.54M    449G    263G    264G

dedup = 1.88, compress = 1.70, copies = 1.00, dedup * compress / copies = 3.20
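Working out the DDT's RAM footprint from the two header lines of that output (entries times the "in core" bytes per entry):

```shell
# From "zdb -DD": duplicate table is 254161 entries at 157 bytes in core,
# unique table is 2175474 entries at 185 bytes in core.
dup=$(( 254161 * 157 ))
uniq=$(( 2175474 * 185 ))
echo "DDT in-core bytes: $(( dup + uniq ))"   # ~442 MB, tiny next to 128GB RAM
```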



On a side note, if anyone knows what a Sunfire x2200 looks like, you may realize it doesn't really lend itself to being a good NAS: it holds a whole two 3.5-inch SATA drives. I'm kind of "thrifty", and wasn't willing to part with the money for a different case or server that could hold more drives, especially given the cost of anything that holds a sufficient number of 3.5-inch drives, which are my preference based on cost per GB.

I threw together a little MDF drive bay that can hold 16 3.5-inch drives. The LSI 9201-16e uses cables with an SFF-8088 connector on one end that split into four SAS connectors on the other. I had an old 750-watt PSU that I'm using to run the drives and fans for the enclosure: I jumped the pins on the 24-pin connector, and I use the power switch on the PSU to turn the drives on and off. When it's actually finished, it'll have front and back doors on hinges, the cables will be managed nicely through that center channel, and it may have more than the current number of fans, but that's up in the air. It'll also probably be painted.

Initial testing of the setup:
e44blyPm.jpg


Some fans(120mm) added for cooling and some better cable organization:
5G2KOQwm.jpg
 