Help me not be an idiot

Status: Not open for further replies.

ZFS Noob
Contributor
Joined: Nov 27, 2013
Messages: 129
I run a small one-man company that hosts virtual machines. I'm looking at storage options for VM hosting so I have a plan for when I outgrow my current solution.

In a perfect world I will find a solution that allows for dense storage, high IOPS, and clustering. In the real world I'm not sure this exists. The Red Hat Storage Appliance can do it for lots and lots of money on an annual subscription. Open-E can do it for less money. Both of these require a pair of RAID controllers that can use some form of SSD caching, and that plus licensing fees drive the costs up considerably.

What I'm hoping is that I can get the performance and capacity I'm looking for using ZFS, and ideally I'll be able to use HAST to configure active/passive failover so hardware failure won't take down my VM cluster.

I'm pretty sure ZFS will work; I'm not sure about HAST though.

I'm hoping y'all can steer me in the right direction so hardware I buy for testing isn't wasted, and I start with reasonable expectations.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

So, with the background out of the way, I have two questions:

1) How well does HAST work with NFS or iSCSI? I use BSD-based firewalls that rely on CARP, and failover with them takes under 2 seconds. I'm assuming failover can be about as fast with HAST/CARP, but I'm also thinking that the latency of synchronous replication to the secondary node could be painful. Can anyone quantify how painful?
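To make the question concrete, here is a minimal sketch of the HAST/CARP arrangement I have in mind. The hostnames (nas1/nas2), device, addresses, resource name, and pool name are all made up for illustration, and the failover steps would normally live in a CARP/devd state-change script:

    # /etc/hast.conf, identical on both nodes
    resource disk0 {
        on nas1 {
            local /dev/da2
            remote 10.0.0.2
        }
        on nas2 {
            local /dev/da2
            remote 10.0.0.1
        }
    }

    # steps on the node taking over, triggered when CARP promotes it to MASTER
    hastctl role primary disk0
    zpool import -f tank
    service nfsd onerestart

As I understand it, every write has to be acknowledged by the secondary before it completes, so the added latency is roughly one network round trip plus the remote disk write; that's the part I'd like someone to quantify from experience.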

Does it make more sense to have the "passive" partner just stay idle, to be used as a target to restore to if something goes wrong with the primary, or is HAST actually viable in this role?

2) I want y'all to tell me where my understanding is screwed up here re: hardware. From what I've read so far, it sounds like:
  • I probably want lots of RAM. Is 64 GB enough? We're talking a couple of dozen VMs, one third of which are running MySQL at ~80 queries per second on average. Sorry to be vague, but part of this is capacity planning, and that's just guesswork. Total database size right now is less than 20 GB, so I'd think 64 GB would be plenty to cache the most frequently accessed data.
  • An SLOG is going to be important for the synchronous writes NFS/iSCSI will generate, so I'm assuming a pair of SSDs will be used to create a mirrored vdev that the pool uses as its dedicated log device.
  • At what point is an SSD for an L2ARC useful? I suppose I can go with what I think is "enough" memory and simply add one later if needed, but they're not that expensive in the grand scheme of things, especially since they don't need to be mirrored.
  • I am assuming the correct way to add storage is to emulate RAID-10: two 2 TB+ drives are added to the chassis, used to build a new mirror vdev, and that vdev is added to the zpool (example commands are sketched after this list). Am I correct that future expansion can be done two drives at a time while a RAID-10 workalike is maintained?
  • Is it possible to configure hot spares?
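To make the hardware questions concrete, here is the pool layout I'm imagining, expressed as zpool commands. The pool name (tank) and device names (da0 through da9) are made up for illustration:

    # initial pool: two mirrored pairs, the RAID-10 workalike
    zpool create tank mirror da0 da1 mirror da2 da3

    # mirrored SSD pair attached as the separate intent log (SLOG)
    zpool add tank log mirror da4 da5

    # optional single SSD added later as L2ARC, if ARC hit rates say it's needed
    zpool add tank cache da6

    # hot spare available to the whole pool
    zpool add tank spare da7

    # future expansion, two drives at a time: another mirror vdev
    zpool add tank mirror da8 da9

If any of these steps reflects the wrong approach, that's exactly the kind of correction I'm after.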
I feel like I'm on the right track, but I'm sure there's some confusion in here. Please point it out so I can have a better feel for what ZFS means as a possible solution.
Thanks in advance.
 

ZFS Noob
Contributor
Joined: Nov 27, 2013
Messages: 129
Not really. I repurposed a Dell R710 with 128 GB of RAM, discovered I couldn't make an SSD work in it as configured, bought a third-party SSD adapter card as a workaround for testing, and ran into problems.

Testing showed wonderful performance with ZFS on cached data, but when I tried to use a dedicated SSD for the ZFS intent log, performance tanked. It turns out the SSD adapter I bought doesn't work properly with FreeNAS: even running the SSD as a single-device pool delivered only about 1% of the IOPS the SSD itself is capable of. There doesn't seem to be a known-good card I can use, so I need a better (and probably dedicated) hardware platform to implement and test ZFS properly. I was on the right track as far as building a performant box when I asked about it...
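For anyone curious, a minimal version of the kind of test I was running looks like this; the pool and device names are hypothetical, and the idea is just to force synchronous writes onto the SSD alone and watch the numbers:

    # throwaway single-device pool on the SSD behind the adapter card
    zpool create ssdtest da4
    zfs set sync=always ssdtest

    # small writes, watched with zpool iostat in another terminal
    dd if=/dev/zero of=/ssdtest/junk bs=4k count=100000
    zpool iostat -v ssdtest 1

The write IOPS reported were on the order of 1% of what the drive's spec sheet claims, which is what points the finger at the adapter rather than at ZFS itself.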

...Then I got really, really tired of dealing with cyberock or whatever his name is. It turns out I'm bad for the FreeNAS project because my SSD card delivers less than 1% of the performance the SSD should, and that's somehow my fault, and apparently I'm not welcome. So F that guy.

I still feel FreeNAS would work very well in my environment, and I can work around the lack of failover with a proper snapshotting and backup strategy. But the emotional energy required to get rational discussions happening here just wasn't worth the effort.

It's still an outstanding project. Too bad that failover won't make it into the free version due to the business model, but doing a proper cost/benefit analysis shows that this is mostly a non-issue with proper planning. Odds are I'll end up building a dedicated FreeNAS box, testing it thoroughly, and migrating virtual machines to it. But I don't think I'll end up contributing much more on the forums. Too much effort for (at least in my case) misleading, insulting, and bad advice from the most prolific poster here.

Right now it's the forum staff that is the biggest flaw with the project, in my opinion.
 

cyberjock
Inactive Account
Joined: Mar 25, 2012
Messages: 19,526
You just replied to a ticket from Nov 27, 2013.

Edit: And yeah, notice that I stopped responding to your posts and suddenly you got no more responses. That's not a coincidence. You'd burned bridges with the other people and they just bailed on you. I was the last one to give up. You are welcome to blame me for your poor experience with the forum, but I'm definitely not the only poster around here and the fact that *nobody* else posted should have been a clue to the problem.

Good luck though. Despite the fact that I don't respond to your posts anymore I do wish you the best with FreeNAS.
 