My new FreeNAS Box

Status
Not open for further replies.

proligde

Dabbler
Joined
Jan 29, 2014
Messages
21
Hi there!

I'm new to this forum (as a writer) but have read a lot here in the past. Now I want to give something back, so I wrote a summary (with pictures) of my FreeNAS hardware build process, which hardware I chose, and why. It's a lengthy article and I'm not sure whether you'd want the whole thing posted here, so I've put it on my blog. If you find it useful and want the content in this forum, just let me know and I'll be happy to post it here.

http://prolig.wordpress.com/2014/01/27/freenas-zfs/

Greets Proligde
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
Interesting article. I'm assuming that with five drives you went with RAIDZ1?
 

Durandal

Explorer
Joined
Nov 18, 2013
Messages
54
Nice build! I have the same CPU and I'm very pleased with the performance so far.

Have you done any performance testing with L2ARC/ZIL and without?
 

proligde

Dabbler
Joined
Jan 29, 2014
Messages
21
@KevinM yes, I did. I was very happy with standard RAID5 in the past, despite articles like http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162 - I've never had any problems rebuilding even "large" (>10TB) RAID5 arrays, and I think those articles take a rather theoretical approach: the math seems correct, but I suspect they assume a somewhat pessimistic URE rate. I'm not storing any data that anyone's life depends on, so I stuck with RAIDZ1, which is still a bit better than its RAID5 equivalent.
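For reference, this is roughly what the pool layout looks like from the shell (device names are just examples - the FreeNAS GUI does the same thing under the hood):

    # hypothetical device names; the GUI runs the equivalent of:
    zpool create tank raidz1 da0 da1 da2 da3 da4
    zpool status tank    # shows one raidz1 vdev with five members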

@Durandal well, sort of, yes. Have a look at the comment on the blog post and my answer to it. In my case it sped things up a lot - unfortunately I have no proper benchmarking data to prove it, other than my n00by test of copying files via CIFS. With L2ARC and ZIL (on the same SSD) I now max out my gigabit link no matter whether I'm sending or receiving. That wasn't the case before, so I'll keep it :smile:
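In case anyone wants to replicate it: using one SSD for both roles means splitting it into two partitions and attaching them to the pool, roughly like this (partition names are just examples):

    # assuming ada0 is the SSD, already split into two partitions
    zpool add tank log ada0p1      # small partition as SLOG
    zpool add tank cache ada0p2    # remainder as L2ARC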

Regards Proligde
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
CIFS and iSCSI both default to async writes, so neither will touch the ZIL unless it's explicitly forced at the dataset level (sync=always). The ability to offload the metadata and checksum updates from the pool drives to the SLOG is probably responsible for the performance increase.

You can SSH/console into the machine and run zilstat while you're putting it under load to see if it's actually having a major impact.
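Something like this (dataset name is just an example):

    # force sync writes on the dataset you're testing
    zfs set sync=always tank/share
    # watch ZIL activity at one-second intervals while copying files
    zilstat 1
    # revert to the default when you're done
    zfs set sync=standard tank/share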
 

proligde

Dabbler
Joined
Jan 29, 2014
Messages
21
@HoneyBadger thanks for pointing out that this offloading might be the reason. Concerning CIFS, I was aware that it uses async writes by default (and I didn't change that). Immediately after adding the ZIL I had a look at the pool, watching "zpool iostat" with a script reloading it every second. As expected, the ZIL is basically idle, moving a few kilobytes/sec at most. So based on what I saw in iostat, the speed increase didn't make much sense to me either, but it was there. I can't say whether it's realistic for offloading to have such an impact on performance, but it matches what I see here.
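For anyone watching the same thing: no wrapper script is actually needed, since zpool iostat takes an interval argument itself:

    # -v breaks out the log and cache devices, 1 = refresh every second
    zpool iostat -v tank 1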

Apart from this, the ZIL usage situation with my iSCSI is completely different: I mount the iSCSI device from an (as far as I can tell) 100% default setup, without explicitly forcing sync writes anywhere, using standard Ubuntu 13.10 and the default open-iscsi package. I created an XFS filesystem on the iSCSI device (not sure whether that's the optimal file system on top of ZFS) and mounted it. Compared to the default-out-of-the-box NFS share (without ZIL) I used before, the performance is outstanding. I should add that this comparison is totally unfair, since I stupidly changed two things at the same time without benchmarking in between :-/
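For completeness, the client side is nothing special - roughly this (the IQN and portal IP are just examples):

    # discover and log in to the target
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2014-01.org.example:tank -p 192.168.1.10 --login
    # the new disk shows up as e.g. /dev/sdb; the device name may differ
    mkfs.xfs /dev/sdb
    mount /dev/sdb /mnt/iscsi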

The interesting thing is: when putting load on my iSCSI share, the ZIL fills up (and empties again) quite rapidly - so there seem to be a lot of sync writes "by default" after all.

I hope I find the time to dive deeper into the specifics of ZFS and its load behaviour with and without ZIL/L2ARC and provide some more reliable benchmarks. All I can say for now: even though I use a single SSD for both ZIL and L2ARC, the overall write performance increased a lot compared to a "blank" ZFS pool without log/L2ARC devices, even in scenarios where I didn't expect it.

Regards Proligde
 