ZFS Noob
Contributor
Joined: Nov 27, 2013
Messages: 129
I run a small one-man company that hosts virtual machines. I'm looking at options for VM hosting so I have a plan when I outgrow my current solution.
In a perfect world I will find a solution that allows for dense storage, high IOPS, and clustering. In the real world I'm not sure this exists. The Red Hat Storage Appliance can do it for lots and lots of money on an annual subscription. Open-E can do it for less money. Both of these require a pair of RAID controllers that can use some form of SSD caching, and that plus licensing fees drives the costs up considerably.
What I'm hoping is that I can get the performance and capacity I'm looking for using ZFS, and ideally I'll be able to use HAST to configure active/passive failover so hardware failure won't take down my VM cluster.
I'm pretty sure ZFS will work; I'm not sure about HAST though.
I'm hoping y'all can steer me in the right direction so hardware I buy for testing isn't wasted, and I start with reasonable expectations.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
So, with the background out of the way, I have two questions:
1) How well does HAST work with NFS or iSCSI? I use BSD-based firewalls that use CARP, and failover there takes under 2 seconds. I'm assuming HAST/CARP failover can be about as fast, but I'm also guessing the latency of synchronous replication to the secondary node can be painful. Can anyone quantify how painful?
Does it make more sense to have the "passive" partner just stay idle, to be used as a target to restore to if something goes wrong with the primary, or is HAST actually viable in this role?
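For context on what I'm picturing, here's roughly the kind of HAST setup I have in mind, as a sketch only: the hostnames, IP addresses, disk device, and the resource name "vmdata" are all made up for illustration, not from any working config.

```
# /etc/hast.conf on both nodes (hypothetical names throughout)
resource vmdata {
        on storage-a {
                local /dev/da6
                remote 10.0.0.2
        }
        on storage-b {
                local /dev/da6
                remote 10.0.0.1
        }
}
```

The idea being that each node then runs `hastctl create vmdata` and `service hastd start`, the active node is promoted with `hastctl role primary vmdata`, the zpool is built on `/dev/hast/vmdata`, and a CARP shared IP in front decides which node clients talk to. Whether the failover script glue (demote, promote, import pool, restart NFS/iSCSI) is actually robust in practice is exactly what I'm asking about.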
2) I want y'all to tell me where my understanding is screwed up here re: hardware. From what I've read so far, it sounds like:
- I probably want lots of RAM. Is 64GB enough? We're talking a couple of dozen VMs, one third of which are running MySQL at ~ 80 queries per second on average. Sorry to be vague, but part of this is capacity planning, and that's just guesswork. Total database size right now is less than 20 gigs, so I'd think this would be plenty to cache the most frequently accessed files.
- A separate log device (SLOG) is going to be important for synchronous writes, so I'm assuming a pair of SSDs will be used to create a mirrored vdev, which can then be attached to the pool as the SLOG.
- At what point is an SSD for an L2ARC useful? I suppose I can go with what I think is "enough" memory and simply add one later if needed, but they're not that expensive in the grand scheme of things, especially since they don't need to be mirrored.
- I am assuming that the correct way to add storage will be to emulate RAID-10: two 2TB+ drives are added to the chassis, they are used to build a new mirror vdev, and that is added to the zpool. Am I correct here, so future expansion can be done two drives at a time and a RAID-10 workalike is maintained?
- Is it possible to configure hot spares?
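In case it helps frame the questions, here's how I understand the list above mapping onto `zpool` commands. This is a sketch; the pool name "tank" and all device names are made up, and I'd welcome corrections.

```shell
# Mirrored SLOG from a pair of SSDs (hypothetical devices da4/da5):
zpool add tank log mirror da4 da5

# L2ARC cache device; no mirroring needed, since losing a cache
# device only costs cached reads, never data:
zpool add tank cache da6

# Grow the RAID-10-style pool two drives at a time by adding
# another mirror vdev:
zpool add tank mirror da7 da8

# Hot spare, shared by the whole pool:
zpool add tank spare da9
```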
Thanks in advance.