Looking for options with my existing SSDs

bfishernc

Dabbler
Joined
Jun 29, 2012
Messages
30
Hi all, new user. Very pleased so far with my setup.

I have a home lab running 2 vSphere hosts and a separate FreeNAS box. This is an upgrade from 1 host running everything. I run View (virtual desktops), so speed is a priority.

In the past (1 host), I used local SSDs to boost performance (they worked awesome). Now that I have FreeNAS running, I have to decide how to best use these SSDs.

I have 2x64GB and 1x128GB drives (standard consumer SSDs, not the very expensive ones).

My options (as I see them... obviously looking for opinions/other ideas):
1) Create a pool of SSD storage and let vSphere's Storage DRS do its thing (it automatically moves VM storage to the appropriate tier based on historical demand).

2) Keep a 64GB SSD in each host for host cache (to overcommit RAM) and for "emergency use" (needing to move VMs onto the host for FreeNAS maintenance, etc.). The 128 would be left in 1 of the hosts for the View replicas (makes desktops faster, but eliminates vMotion for them, etc.).

3) Make a mirror of the two 64GB drives for the ZIL, and use the 128GB for L2ARC cache.

I think my preference is #3 (make this box fast enough that I don't need local SSDs in the hosts). My concern is that I've read "don't use cheap SSDs for the ZIL since you'll wear them out". Is there any truth to that? They are rated for 1M hours MTBF (over 100 years). I realize the ZIL will be busier than a normal PC, so 1M hours may not be realistic... but what about 5-10 years?
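(For what it's worth, one quick way to keep an eye on actual wear on consumer SSDs is SMART. A rough sketch from the FreeNAS shell, assuming the SSD shows up as /dev/ada1 and noting that the wear-related attribute names vary by vendor:)

  smartctl -a /dev/ada1                                            # full SMART report for the drive
  smartctl -A /dev/ada1 | egrep -i 'wear|lifetime|lbas_written'    # wear-related attributes, where the vendor exposes them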

Any and all thoughts appreciated :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hey, interesting. I keep meaning to play with View, but am put off by the enterprise-sized intended deployment; there doesn't seem to be a clear and concise guide to small deployments.

Anyways. We're a 4.1 shop, so I haven't seen Storage DRS in action. It doesn't strike me that most two-host deployments would gain much from automatic storage placement unless the hosts were massive with tons of VMs. If you're just looking to experiment and learn, it seems like it'd be less expensive to deploy several pools of cheap spinning rust for that purpose and find something more valuable for your SSDs to do. On the other hand, you're not talking about a large dollar investment in SSDs. You're the only one who can make that determination of value, though.

I also haven't seen View; my impression was that it offered some sort of desktop cloning, so I'm guessing that's what "View replicas" are. We don't have any real use for that here, so I never followed up on the feature. The 64s as host cache are probably reasonable and well sized for the task unless you have a busy environment. Our VMs tend to be on the small side, so we don't bother with host cache here, at least right now.

Which leads me to 3). 128GB is a nice-sized L2ARC and will probably hold "everything of interest plus some". You haven't really described what your NAS setup looks like size-wise, or how busy you expect it to be, but 128GB is large enough to cover any home lab setup I can reasonably imagine. Make sure you have sufficient memory! See http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34674.html about expected size: you need about 3GB of RAM ABOVE AND BEYOND NORMAL just to hold the headers for an L2ARC that size. You probably won't kill the L2ARC with IOPS or write cycles either, at that size.
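For reference, a rough sketch of what attaching the 128GB drive as L2ARC looks like from the shell; the pool name "tank" and device name "ada3" are placeholders, and FreeNAS may also let you do this from the GUI volume manager, so treat this as an illustration of what happens underneath:

  zpool add tank cache ada3                      # attach the 128GB SSD as an L2ARC (cache) device
  zpool status tank                              # the SSD should now appear under a "cache" section
  sysctl kstat.zfs.misc.arcstats.l2_hdr_size     # RAM currently held by L2ARC headers, in bytes (name may vary by ZFS version)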

As for the ZIL, the important part to remember is this: ZFS commits changes, generally sync writes such as metadata changes, to the ZIL. Those that aren't simultaneously pushed out to your main storage pool get spooled out to the pool on an as-available basis, and here's the purpose of the ZIL: if your host crashes before the storage pool is updated, you have a copy of ZFS's intended changes to your pool... so suddenly the name "ZIL" (ZFS Intent Log) should make sense. Losing the ZIL can potentially render a pool unusable, especially with ZFSv15, and that's one of the driving factors behind the advice to mirror it. More gory details are summarized here: http://hardforum.com/showthread.php?t=1591181 - a fairly interesting discussion of the various factors and strategies.
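Assuming the two 64GB drives show up as ada1 and ada2 (placeholder names, and again "tank" is a stand-in for your pool), adding them as a mirrored log device is roughly:

  zpool add tank log mirror ada1 ada2   # attach both 64GB SSDs as a mirrored SLOG
  zpool status tank                     # they show up under a separate "logs" section

A dedicated log device only ever needs to hold a few seconds' worth of sync writes, so 64GB is far more capacity than the ZIL will actually use.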

You can use cheap SSDs for the ZIL if you mirror them. What you worry about with SSDs is the number of write cycles each block incurs, and the controller works very hard on your behalf to make that problem just magically work out. But after you've written 60GB to the ZIL, you can think of it as every block having been written once :smile: If you're writing out tons of sync writes, your ZIL will be hit *hard*. If you're writing mostly async writes, your ZIL won't be involved. Solution: figure out how to keep the ZIL out of the equation by minimizing sync writes. For a home lab setup, I don't think you're likely to stress it hard enough that either disk of a mirrored ZIL would fail quickly from excessive write cycles, unless you're doing things that push sync writes really hard - just don't do that! What you really want to guard against is flash crash, where the device just suddenly decides to go south and take your data with it. If you've read anything at all about SSDs, you'll know that they're just as fallible as hard drives, but when they go, they often just completely go.
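If you want to see whether a workload is actually leaning on the ZIL, a rough way to check (again with "tank" as a placeholder pool, and assuming a ZFS version new enough to expose the per-dataset sync property) is:

  zpool iostat -v tank 5      # watch the "logs" vdev: write activity there is sync writes hitting the SLOG
  zfs get sync tank           # standard = honor sync writes; disabled = treat them as async (risks losing the last few seconds of writes on a crash)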

On the other hand, be aware that ZFS can be pretty zippy even without a ZIL device. Many people report perfectly acceptable performance without one at sub-gigabit speeds, which I expect is largely dependent on workload and tuning. I seem to have a knack for generating pathological ZFS workloads, I guess, because I can cause great stress for a system even with a ZIL and all that without much trouble. :smile:
 

bfishernc

Dabbler
Joined
Jun 29, 2012
Messages
30
Great feedback, thanks

I LOVE View (disclaimer - I am a VMware sales rep). I have been running it for several years - my homeschooled kids, my wife, myself, several coworkers, etc. I love how easy and powerful it is. I love having multiple desktops I can access anywhere (home, laptops, zero clients, a customer's office, iPad, hotel business centers, etc.). I love how easy it is to fix my kids' screwups! LOL

But because I'm running desktops, speed is important - a 0.5 second lag is very noticeable on a desktop. I do push these desktops pretty hard: kids playing full-screen YouTube videos and games, daily office work (much more stress than most server VMs), playing with all the latest software.

I haven't played enough with my new FreeNAS server to see if it's sufficient (in fact, I'm in the process today of upgrading to vSphere 5.0U1 and View 5.1). Performance-wise it appears to fall somewhere between my past local RAID and my SSDs. View 5.1 supports host block-level caching, which will dramatically reduce IOPS on the storage, so I suspect I may be fine without SSD enhancements... but since I have them, I figured they might be a great performance enhancer :) Still leaning towards #3 - having the 2 SSDs in a mirror makes me feel pretty comfortable with them... and IF I lost the L2ARC SSD, not much bad would happen.

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Depending on your hardware specifics, host-based VMware datastores can be butt-slow or very fast (or anywhere in between), since VMware defaults to write sync goodness. I'm sure you've got a handle on that, as a VMware sales rep. The nature of the beast and all that. Turning on the write cache for non-BBU RAID controllers is an option (KB #2001946, etc.) in some scenarios; in others, you're just stuck. I've found the most useful configuration to be something like an M1015 (LSI) with SSDs in RAID1, which gives you fast without being too expensive, assuming you don't need lots of disk space. There's no way to touch this with FreeNAS short of going 10GbE, though we've had FreeNAS VMs running on a host that can then deliver massive network throughput back to that host...

But FreeNAS is just as much of a wildcard. For example, I've got some older Opteron 240EE storage servers (~2005 vintage, 8GB RAM, 4 modern 5400RPM 2TB drives in RAIDZ2) that have trouble delivering more than maybe 10-20 megabits(?) when using ZFS, especially writing, but can ram lots of data around on UFS all day long. Or the new Xeon E3-1230 with 32GB and 4 modern 7200RPM 4TB drives that can do 700-800Mbps of iSCSI and pretty much fill a gigE with some other protocols. A single E3-1230 core with 512MB and a pair of SSDs feels about infinitely fast running as a VM and being accessed via its host... but across a gigE it slows down a bit. :smile:

From a design perspective, host-based datastores kind of really suck unless you really need them for a specific reason, and a lot of VMware's value is wasted if you can't migrate machines around easily. I've been leaning toward using cheap, low-watt NAS appliances in combination with FreeNAS to provide a mixture of storage... FreeNAS has a real hard time addressing some of the problems we need addressed here, including the need to design against avoidable single points of failure, which means multiple independent NAS units. FreeNAS is awesome at providing large storage pools, but the entire system can get bogged down under a heavy workload. So right now we're kind of working towards replacing host datastores with low-watt NAS appliances. We're finding that even if they can only handle 200-300Mbit/s, that's actually OK, because a cheap NAS can be had for as little as $100/unit, AND they're super low-power compared to conventional iSCSI gear. Well, anyways, that's a bit off topic, I guess.

So, curious, is there actually a way to do VMware View that is suitable for "home" or "small business" use? That is to say, one that doesn't require a several-thousand-dollar, many-seat license that's clearly intended for larger offices or enterprises? Quite frankly, it's a little frustrating to have a lot of ESXi capacity available but to be running VMware Fusion to run the few stupid DOS apps we need... like, ha, ha, VMware vSphere Client. :smile:
 

bfishernc

Dabbler
Joined
Jun 29, 2012
Messages
30
"From a design perspective, host-based datastores kind of really suck unless you really need them for a specific reason, and a lot of VMware's value is wasted if you can't migrate machines around easily."

Absolutely true, and the reason for my current project :). I'm upgrading my system this weekend to vSphere 5.0U1 and View 5.1 (the host cache dramatically reduces IOPS demand).

Unfortunately, no, there are no "home" or "personal" View options. The lowest-cost option is a 10-user bundle. If you have excess vSphere capacity, you can use View add-ons (less expensive; they don't include vSphere or vCenter). Sorry, I wish I knew of a program I could help you out with.

Regarding the vSphere Client - we now have a browser version. It covers a lot of the functionality but not everything yet (not all plugins work, etc.)... but we are continuing to improve it until it covers everything. It makes a lot of sense, and the future previews I've seen look great.

Working on cleaning off my SSDs - will probably try the L2ARC cache first (lowest risk). Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Actually, I wasn't aware that you could use an existing cluster. I'm pretty sure I had seen something at some point in the past saying that View was supposed to run on its own cluster, and while most "rules" can be bent or broken by the experienced, it didn't seem compellingly attractive enough to pay for it just to see if I was clever enough to figure out how to make that work. That's very interesting.
 

bfishernc

Dabbler
Joined
Jun 29, 2012
Messages
30
Best practice for a large implementation is to put it on a separate cluster. Also, if you buy the View bundles (rather than add-ons), you must use a separate cluster (since vSphere Enterprise Plus and vCenter are included). Add-ons and bundles can't be mixed.

So if you have a small number of users and have excess vSphere capacity, using add-ons is a good approach.
 

bfishernc

Dabbler
Joined
Jun 29, 2012
Messages
30
I came up with an option 4:

4) Buy 1 more 60GB SSD and make a ZFS pool of the 3 SSDs (rough sketch below). Use this pool for my priority VMs, and use my hard drive pool for secondary VMs.

Not sure if using these for the ZIL and Cache would help me more...
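If option 4 wins out, a minimal sketch of the pool layout (device names ada1-ada3 and the pool name "ssdpool" are made up, and FreeNAS would normally build this via the GUI volume manager) might be:

  zpool create ssdpool raidz1 ada1 ada2 ada3   # ~120GB usable, survives losing any one SSD
  # or, trading redundancy for space:
  # zpool create ssdpool ada1 ada2 ada3        # plain stripe, ~180GB usable, one dead SSD loses the pool
  zfs create ssdpool/viewvms                   # dataset (or a zvol) to share out to the vSphere hosts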
 