Tuning Suggestions from article - Yays or Nays - Any real experience?

Status
Not open for further replies.

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Experts,

I saw this article written in 2013.
https://virtualexistenz.wordpress.c...vmware-nfs-datastores-a-real-life-love-story/

With the mantra 'No two environments or workloads are the same' in mind, does anyone care to comment on the advisement of this article from a technical or experience standpoint?


Has anyone tried any of these suggestions in their own environment?

Does anyone know that any of these tweaks are a terrible idea and will have negative consequences in FreeNAS?

Note: If you respond, can you please cite a source rather than just blasting opinion? If you know it's a bad setting, why? If you have tried it before and it worked, try to explain how and why. Saying you like or dislike something without any reasons doesn't help anyone much...
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Experts,

I saw this article written in 2013.
https://virtualexistenz.wordpress.c...vmware-nfs-datastores-a-real-life-love-story/

With the mantra 'No two environments or workloads are the same' in mind, does anyone care to comment on the advisement of this article from a technical or experience standpoint?


Has anyone tried any of these suggestions in their own environment?

Does anyone know that any of these tweaks are a terrible idea and will have negative consequences in FreeNAS?

Note: If you respond, can you please cite a source rather than just blasting opinion? If you know it's a bad setting, why? If you have tried it before and it worked, try to explain how and why. Saying you like or dislike something without any reasons doesn't help anyone much...
I guess I'll take a poke at this. I know you have specified that you don't want any input that doesn't cite some rigorous chapter and verse, but I think you don't have much choice on this, because:

First and foremost, I'd like to quote from our official forum rules, which you can read by clicking the "FORUM RULES" link at the top:

Topics Likely To Go Unanswered:

Virtualization of FreeNAS on Type-1 Hypervisors such as ESXi, XenServer, and Hyper-V. In short, if you can't do this without help, you shouldn't be doing it.


We have extremely cogent reasons for this, and you can, at your leisure, familiarize yourself with them in a variety of forum posts in which people (especially Cyberjock) discuss why we won't usually touch the topic of virtualized FreeNAS.

I will address some of his recommendations, however, as they are generally applicable and really have little to do with virtualization. It's a mixed bag. Let me cite 5 things he says. Two seem wise, two seem very unwise, and one seems like something he has no right to have an opinion on.

  • "Always use mirrors, never RAID-Z". Typically, the performance *IS* much better with mirrors. That's not a lie.
  • Record Size: a subtle question, and the guy who posted this thing is not equipped to answer it. Do your own research.
  • "Do not use Dedupe". Sage advice. Dedupe is excessively stupid to use for 99.1% of users.
  • "Enable Jumbo Frames". LOL. You may read up on how silly of an idea this is in this forum at your leisure.
  • "Use LZJB for compression". Yes, if you would like low performance, and low compression, that sounds like a great idea. (either use "no" compression, or use the default lz4).
  • OK, I guess let's go for 6. "Update atime on read: disable". That's fine. You will probably improve performance for certain workloads with this, at the expense, of course, of not having updated atimes. (See the shell sketch just below this list.)
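For anyone who wants to see what those knobs look like from the shell, here is a minimal sketch; tank/vmstore is just a placeholder dataset name, and on FreeNAS you would normally set these through the GUI dataset options rather than at the command line:

Code:
# Check what the dataset is currently set to (placeholder dataset name).
zfs get compression,dedup,atime,recordsize tank/vmstore

# lz4 (the FreeNAS default) generally beats lzjb on both speed and compression ratio.
zfs set compression=lz4 tank/vmstore

# Skip the metadata write that otherwise happens on every read.
zfs set atime=off tank/vmstore

# Dedup stays off unless you have measured the RAM cost and actually need it.
zfs set dedup=off tank/vmstore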
As for the balance of the article, it seems to me to be mostly questionable. I am not inclined to cite chapter and verse for my advice, as the questions are well addressed already throughout the forum, some of them by people who are much smarter than I am on this. Also, the author of the article has exceeded, by a substantial amount, the number of grammar and spelling errors that I will tolerate before I stop reading.

I would counsel you to more or less ignore the article, as each question is thoroughly answered by existing posts in this forum from power users who have more than adequately established their bona fides with FreeNAS and ZFS.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Can't cite much either. But I'd view it like this:

  • This is about a DATASTORE, not about virtualizing FreeNAS. The virtualization rules are not relevant.
  • Dude has been on ZFS since 2009. MUCH longer than most, and tests rigorously. However, some of this is OLD and things have gotten better.
  • If you read the comments, he worked with jgreco, @mav, pbucher, and even cyberjock while tuning in this thread. These are heavyweights.
  • This is legit experience talking. Not BS. Well tested... and only cited for his environment. Take that into account for jumbo frames and assume he knows his gear and has tested it.
  • There are multiple platforms at play.
  • Most of this tuning is pretty standard. Use mirrors. Tons of RAM. L2ARC. Grab a suitably designed, fast SLOG for sync NFS. (Jumbo frames and record size might be more specific to his gear and workloads; see the layout sketch after this list.)
  • There is one subtle neat bit with the write bias: "throughput" for workloads that are random and waste ARC / L2ARC. This is clever, insightful, and not mentioned often.
  • The grammar and spelling I'll give him a pass on, since English is obviously one of several languages he speaks.
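To make that layout concrete, here's a rough command-line sketch; the pool name, device names, and GPT labels are all placeholders, and on FreeNAS you would normally build this through the GUI volume manager instead:

Code:
# Striped mirrors for IOPS, plus a dedicated log device (SLOG) and an L2ARC cache device.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  log gpt/slog0 \
  cache gpt/l2arc0

# NFS from ESXi issues sync writes, so by default they land on the SLOG.
zfs create tank/nfs-datastore
zfs get sync,logbias tank/nfs-datastore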

All in all, the guy is running large / fast production systems successfully, and has been for a long time. But the basics and generic performance advice you see here are often more relevant to everyday systems. Also, moving iSCSI into the kernel and getting many of the offloading primitives functioning is a game changer for VM workloads.

Two bits. There really is no single answer or recipe.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
mjws00,

I think you captured more of the spirit of the post, thanks. There seems to be a tendency to go negative quickly in this forum for some reason, which I guess I can understand if you constantly have new people asking the same questions (without reading/searching) or wanting one magic solution to make their half-hearted system instantly become a $100K+ SAN. Goofing around on a test system looking for subtle ways to improve your workload is half the fun (as long as you have time).

Can't cite much either. But I'd view it like this:
  • If you read the comments, he worked with jgreco, @mav, pbucher, and even cyberjock while tuning in this thread. These are heavyweights.
Thanks for citing this; I can tell it's worth reading a few times.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Can't cite much either. But I'd view it like this:
  • There is one subtle neat bit with the write bias: "throughput" for workloads that are random and waste ARC / L2ARC. This is clever, insightful, and not mentioned often.
I did some forum searching for this and didn't find anything. I'll broaden the search to other sites and try to get more educated on what it's for and how it should or shouldn't be used.

I wonder if anyone has experience changing this setting in their environment, and why?
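In case it helps anyone else searching later, the property in question is logbias; here's a minimal sketch of inspecting and experimenting with it (the dataset name is a placeholder):

Code:
# Check the current write bias on the dataset.
zfs get logbias tank/vmstore

# "latency" (the default) sends sync writes through the ZIL / SLOG for fast acknowledgement.
# "throughput" skips the dedicated log device and optimizes for pool bandwidth instead,
# so it's something to benchmark per workload rather than set blindly.
zfs set logbias=throughput tank/vmstore

# Revert if latency-sensitive traffic suffers.
zfs set logbias=latency tank/vmstore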
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Since I'm sure you want my input, here's my input.

ZFS tuning is like trying to solve world hunger, but 1/2 the people that are hungry don't want what you are serving.

It's a pain to do. It's incredibly complicated. Even for me, when tasked with tuning a system for a specific function, I can do a few of the more "gross" changes, but when it comes time to dial in precise values, it is much more ambiguous than anyone realizes. Often tuning is about making it "good enough", not making it as efficient as possible. And then, when it's no longer "good enough", you weigh the time it will take to get more efficiency out of further tuning against simply spending money to buy more hardware (or even just adding a second server). Whichever of those two options is easier is the one you should do. For the vast majority of people, having a single stupidly large server isn't as cost-effective as having two or three smaller servers. You can spend stupid amounts of time trying to optimize ZFS, and you may not see much of an improvement if you are near the limits of the hardware.

Just looking at the stuff in your sig, you've got 32GB of RAM. Just like the FreeNAS manual says, if ZFS is slow the single best way to increase performance is to add more RAM. That is, and will always be true, until you get to stupidly expensive quantities of RAM.

I am lucky to have helped someone build two identical FreeNAS machines that had 256GB of RAM, 1TB of L2ARC, and with their setup they are beyond thrilled with their performance. One of his competitors was quite unhappy when they tried to build a system, spent way more money on the box than he did (like almost double because they didn't spend money in the right places), and performance was much much lower. His competitor called me asking if I could help tune the system. ;)

But long before you start trying to do tuning and stuff, you should definitely be spending money on more RAM. 32GB is basically a joke if performance matters and you aren't using the system for home use. Even with FreeNAS certified systems from iXsystems, the minimum amount of RAM you can buy is 64GB.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Just looking at the stuff in your sig, you've got 32GB of RAM. Just like the FreeNAS manual says, if ZFS is slow the single best way to increase performance is to add more RAM. That is, and will always be true, until you get to stupidly expensive quantities of RAM.

But long before you start trying to do tuning and stuff, you should definitely be spending money on more RAM. 32GB is basically a joke if performance matters and you aren't using the system for home use. Even with FreeNAS certified systems from iXsystems, the minimum amount of RAM you can buy is 64GB.
I wasn't posting the question for tuning the system I'm running now, mostly for ZFS education purposes. I do agree completely, though. That signature platform is being replaced by a SuperMicro-based head, 72GB RAM, M1015, Intel 3500 SLOG, etc. I'll just be moving the disk enclosure and all the crappy disks over. They'll stick around while we get some 6G disks.

In the post cited above, there's a lot of very good discussion and performance evaluation for their respective workloads, which was hugely enlightening about going about things more effectively than throwing hardware at a problem or trying to tweak something to make crap become amazing. It showed good hardware, good process, and a lot of tweaks. Those tweaks may or may not be necessary for everyone, but I find it difficult to research some of the ZFS parameters when you don't know much about them. They aren't documented well in FreeNAS because 'the defaults work for most people' - quoting you and lots of other people.

Anyway, I am really interested in these few systems and the people who have maintained them.

If the SLOG can outperform the pool, and there's a large amount of RAM allowing the txg to grow larger than the pool can keep up with, it's hard to learn how to use the available settings for tuning performance without knowing how they work together. Google (and other ZFS explanations) often just says what each setting does, not how to PROPERLY use a setting, or when you should or shouldn't.

Those are the conversations I find very interesting.
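For the txg side of that, the knobs I've been looking at so far are FreeBSD sysctls; a rough sketch only, since the exact names vary between ZFS versions (check what your build exposes), and on FreeNAS these would go in as GUI tunables rather than being set by hand:

Code:
# See which transaction-group / write-throttle knobs this build exposes.
sysctl vfs.zfs.txg.timeout
sysctl -a | grep -i dirty_data

# Example only: cap dirty (not-yet-synced) data at 4GiB so a huge RAM-backed txg
# can't be built up faster than the pool can flush it.
sysctl vfs.zfs.dirty_data_max=4294967296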
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Experts,

I saw this article written in 2013.
https://virtualexistenz.wordpress.c...vmware-nfs-datastores-a-real-life-love-story/

With the mantra 'No two environments or workloads are the same' in mind, does anyone care to comment on the advisement of this article from a technical or experience standpoint?


Has anyone tried any of these suggestions in their own environment?

Does anyone know that any of these tweaks are a terrible idea and will have negative consequences in FreeNAS?

Note: If you respond, can you please cite a source rather than just blasting opinion? If you know it's a bad setting, why? If you have tried it before and it worked, try to explain how and why. Saying you like or dislike something without any reasons doesn't help anyone much...

Regarding Intel's 10G NICs on FreeNAS 9.3 - see jpaetzel's comment #6 here: https://bugs.freenas.org/issues/10306
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Interesting read on the Intel 10Gb links. It seems like a Chelsio S320E-CR would be almost a direct replacement, and for a bargain!
 