FreeNAS for VMware View Desktops

Status
Not open for further replies.

SJD

Cadet
Joined
Sep 24, 2013
Messages
2
So I'm completely new to FreeNAS and, other than some exposure to the community edition of Nexenta, I haven't touched a Unix variant in 15 years. It's simply a result of being in senior management and having every bit of technical expertise sucked right out of my head through budget spreadsheets and strategic plans. I embarked upon this task because I have come to loathe EMC. They make a great product, and it works wonderfully well, but we have over 100k tied up in EMC and to expand our storage to meet our needs over the next 2-4 years will cost us another 100k. I'm also proving to my technical team that the boss still has some skills left and isn't simply a pencil pusher. (This is the fun part of the exercise.)

What I want to build is a high-performance datastore that will host our VMware View desktops for significantly less than the cost of expanding our EMC SAN. I think we may have something that could work, but wanted the community's opinion. The system specs are below, and the screenshot shows two tests from running iozone (the first was for 10GB and the second for 40GB).

What do you think? Can this handle 80 typical Windows desktops running in View? What other tests would give me a better feel for the system's capabilities?

1 Dual Sockets Server Board X9DRi-F
2 Intel Xeon E5-2609 4-Core 2.4 GHz 10 MB LGA2011
8 4GB 1600MHz DDR3 RDIMM ECC [Total 32GB]
4 Crucial M500 480GB 2.5" SATA 6Gb/s
8 Seagate Constellation ES.3 3TB 3.5" SATA 6Gb/s 7200RPM Hard Drive
1 LSI Nytro MegaRAID 8100-4I RAID Controller with 100GB SLC NAND Flash
1 Intel X540-T2 10GbE 2-Port RJ45 Copper Card

We're using the onboard SLC SSDs in the MegaRAID for the SLOG device and the four Crucial SSDs for 1.7TB of L2ARC (this part is ridiculous, I know).
Six of the 3TB drives are set up in RAIDZ2 and we have two spares at the moment. (I wanted to put all 8 in the RAIDZ2, but FreeNAS barked that it was not an optimal configuration, so they are set up this way for testing).
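
A rough sketch of the capacity math for those two layouts, plus a guess at why FreeNAS flagged the 8-wide config (ballpark numbers only; real usable space is lower once ZFS overhead is counted):

```python
# Ballpark usable space for the RAIDZ2 layouts mentioned above ("3TB" taken at
# face value; ZFS metadata and slop space will shave some off in practice).
def raidz2_usable_tb(total_drives, drive_tb=3):
    return (total_drives - 2) * drive_tb     # RAIDZ2 spends two drives on parity

print(f"6-wide RAIDZ2: ~{raidz2_usable_tb(6)} TB usable (4 data disks)")
print(f"8-wide RAIDZ2: ~{raidz2_usable_tb(8)} TB usable (6 data disks)")
# The "not optimal" warning is most likely the old rule of thumb that a RAIDZ2
# vdev should have a power-of-two number of data disks (4 is, 6 isn't); it's a
# performance heuristic, not a hard limit.
```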

Thanks

Steve
[Attachment: nasperf.jpg]
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok, you need to stop and breathe. Kudos to you trying to prove you aren't still a pencil pusher. First though... realize that you might actually be a pencil pusher now :(.

Then:

1. Get a LOT more RAM. Don't do less than 8GB sticks, and maybe not less than 16GB sticks. If you do smaller sticks you'll be looking at replacing them with bigger sticks later for SURE. I wouldn't do less than 64GB of RAM for starters if you want to be able to realistically tune ZFS in your lifetime. If you have ZFS tuning experience, you might be able to get by with 32GB. Remember, ZFS eats RAM for breakfast. RAM is your "cheap" method of performance improvement. So don't go overboard with L2ARC and ZIL space, as they only help so much, and only with certain workloads. 10GB of ZIL/SLOG is a cubic boatload, so if you plan to go with a larger SSD for any reason, partition off nothing more than 5GB or so. More won't help.
2. You need RAM for the L2ARC index. Typically you shouldn't exceed a 5:1 ratio (if I'm remembering correctly... kind of sleepy), so you are looking at some serious RAM if you want 1.7TB of L2ARC. And remember you'll need even more than that for the RAM caching. If I remember correctly it's something like 400 bytes per 4KB entry, so do the math (there's a rough sketch after this list) and you'll figure out that your total RAM won't even be able to index an L2ARC of that size.
3. Your choice of RAID controllers is non-standard for FreeNAS. So make sure they are compatible with FreeBSD/FreeNAS before buying them. Most people use M1015s; 8 SAS 6Gbps ports for $100 is a steal!
4. Intel NICs are the best, but some of the absolute latest don't work with FreeNAS yet. So verify that the driver version is compatible with your NIC before purchasing it.
5. If you can do 2 vdevs of 6 drives that might be faster than a single vdev. Mirrors are also a very good way to go.
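
To put a rough number on point 2, here's a back-of-envelope sketch using the ~400-bytes-per-entry figure above (the exact overhead varies by ZFS version and record size, so treat it as order-of-magnitude):

```python
# Back-of-envelope RAM cost of indexing 1.7TB of L2ARC with small records,
# using the ~400 bytes per entry figure quoted above.
l2arc_bytes  = 1.7 * 2**40      # proposed 1.7TB of L2ARC
record_bytes = 4 * 2**10        # assume 4KB records, typical for VM datastores
entry_bytes  = 400              # approximate ARC header cost per L2ARC entry

entries    = l2arc_bytes / record_bytes
ram_needed = entries * entry_bytes

print(f"L2ARC entries: ~{entries/1e6:.0f} million")
print(f"RAM for L2ARC headers alone: ~{ram_needed / 2**30:.0f} GiB")
# Roughly 170 GiB just for the index, far beyond 32GB, before the ARC caches anything.
```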

Be careful about throwing more hardware at the problem. More hardware in the right places is good, but all too often ZFS newbies throw it in the wrong place, get frustrated, then give up on ZFS.

UFS is a very reasonable option if you really want high speed for ESXi. Keep in mind you don't get the same data reliability, but that's the tradeoff.

Good luck!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
1 Dual Sockets Server Board X9DRi-F

only gigE ... for $400.

2 Intel Xeon E5-2609 4-Core 2.4 GHz 10 MB LGA2011

the second-most sluggish of the E5's.

8 4GB 1600MHz DDR3 RDIMM ECC [Total 32GB]

just ... wtf.

4 Crucial M500 480GB 2.5" SATA 6Gb/s
8 Seagate Constellation ES.3 3TB 3.5" SATA 6Gb/s 7200RPM Hard Drive
1 LSI Nytro MegaRAID 8100-4I RAID Controller with 100GB SLC NAND Flash
1 Intel X540-T2 10GbE 2-Port RJ45 Copper Card

and more ... wtf.

We're using the onboard SLC SSDs in the MegaRAID for the SLOG device and the four Crucial SSDs for 1.7TB of L2ARC (this part is ridiculous, I know).
Six of the 3TB drives are set up in RAIDZ2 and we have two spares at the moment. (I wanted to put all 8 in the RAIDZ2, but FreeNAS barked that it was not an optimal configuration, so they are set up this way for testing).

Okay, really, question one is why you didn't just get something like the X9DR7-TF+. Gorgeous board. Not quite FreeNAS-perfect but pretty close. For $700ish, you get a board that outguns that X9DRi and includes dual 10GbE and an LSI2208; $200 more for a BBU and you can make a heck of a SLOG ZIL device with it...

The E5-2609 is the slowest CPU known to man. Except possibly the E5-2603. Seriously, I got one sitting around here that was a placeholder in my toy box, which is in fact an X9DR7-TF+ with an E5-2697v2, 128GB of RAM, and a dozen 4TB drives in a 24 drive chassis. Glass half full? Or half empty? Who can tell.

But for a pure NAS, I'd get an E5-2637v2, and if circumstances called for it, a second one. You won't need massive numbers of cores - four may be plenty, but core speed and lots of RAM are the things that will make ZFS fly.

But really you should pump the machine full of memory. 128GB of M393B2G70BH0-CK0 is about $1200 currently. To support 1.7TB of L2ARC you should have at least that, and preferably more like 192GB of RAM.

Basically if it was *me*, I'd put together:

$700 X9DR7-TF+
$2600 2 x E5-2637v2
$2400 256GB RAM
$1000 nice chassis
$1400 4 x Crucial M500 480GB
$2800 8 x Constellation ES.3 4TB
$300 LSI2008 HBA for ZFS happiness
$200 2 x Random MLC SSD for use with the 2208 as SLOG.
--------
$12,000 (rounding up)
 

Nathan

Cadet
Joined
Sep 9, 2013
Messages
6
$2600 on processors just for storage seems like overkill to me.

I also don't really see the point of the LSI2008 on the motherboard either... he could save the money and get the X9DRH-iTF.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
$2600 on processors just for storage seems like overkill to me.

Yeah, see, the real problem here is that at the end of the day, a lot of FreeNAS performance issues are directly related to clock performance and memory.

So you tell me which is a smarter choice here:

Option A (single CPU, 32GB LRDIMMs):
1 x E5-2637v2 at $1300
8 x HMT84GL7MMR4A-H9 (256GB) at $1000 each
Total: $9500

Option B (dual CPU, 16GB RDIMMs):
2 x E5-2637v2 at $1300 each
16 x M393B2G70BH0-CK0 (256GB) at $150 each
Total: $5000

Hint: I think my way is smarter, because not only do you get DDR3-1600 instead of a DDR3-1333 part, but you also get four more cores, and you do it for a bit more than half the price. It is completely possible that the four extra cores will never be used, but there is also less to worry about going that route.
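
A quick check of those two totals, using the per-part prices quoted above (rough figures; the first total in the thread is rounded up a little):

```python
# Totals from the prices quoted above (rough list prices, not exact street prices).
option_a = 1 * 1300 + 8 * 1000   # single E5-2637v2 + 8x 32GB DDR3-1333 LRDIMM
option_b = 2 * 1300 + 16 * 150   # dual E5-2637v2 + 16x 16GB DDR3-1600 RDIMM

print(f"Single CPU + 32GB LRDIMMs: ${option_a:,}")   # $9,300 (quoted as ~$9,500)
print(f"Dual CPU   + 16GB RDIMMs:  ${option_b:,}")   # $5,000
```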

I also don't really see the point of the LSI2008 on the motherboard either... he could save the money and get the X9DRH-iTF.

That's a remarkably poor choice; LSI2008's are cheap and easy to add on. Picking a board that offers an LSI2208, on the other hand, is a clever choice, because the cost of a 9265-8i is about $600 all by itself, and it turns out that you can add a BBU inexpensively and then abuse the controller as an extremely fast SLOG ZIL device.

Now the thing is, since we're talking about a replacement for a ONE HUNDRED THOUSAND DOLLAR vendor supplied option, I'm making the assumption that going high end on the FreeNAS box isn't really that awful, and ESXi with a large active userbase could be very stressy on a machine.
 

Nathan

Cadet
Joined
Sep 9, 2013
Messages
6
Hint: I think my way is smarter, because not only do you get DDR3-1600 instead of a DDR3-1333 part, but you also get four more cores, and you do it for a bit more than half the price. It is completely possible that the four extra cores will never be used, but there is also less to worry about going that route.

I never said to not go with an Ivy Bridge E. You mention the benefit of additional cores but completely disregard the large price/performance advantage that an E5-2650v2 offers over the 2637?
That's a remarkably poor choice; LSI2008's are cheap and easy to add on. Picking a board that offers an LSI2208, on the other hand, is a clever choice, because the cost of a 9265-8i is about $600 all by itself, and it turns out that you can add a BBU inexpensively and then abuse the controller as an extremely fast SLOG ZIL device.

The 2208 on that motherboard does not come with SFF-8086 connectors, just regular SAS-2 connections and maxes out at 16 devices. 16 is fine for his current setup, but it sounds like he is wanting to expand in the future. That makes sense, however, if you are of the mind that SAS expanders are a bad thing... I am not.

Why would he want a 9265? You can't use the cache it offers if you are running it in JBOD mode.

Yes he is looking for an alternative to a 100k enterprise solution, but obviously saving money is great... otherwise you would have been recommending SLC or PCIe-based storage and not consumer MLC SSDs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I never said to not go with an Ivy Bridge E. You mention the benefit of additional cores but completely disregard the large price/performance advantage that an E5-2650v2 offers over the 2637?

What price/performance advantage would that be?

Have you looked at what you're suggesting?

http://ark.intel.com/products/75269/Intel-Xeon-Processor-E5-2650-v2-20M-Cache-2_60-GHz

8 cores. 2.6 base clock. 3.4 max turbo. List price $1166.

http://ark.intel.com/products/75792/

4 cores. 3.5 base clock. 3.8 max turbo. List price $996 (ooo look it dropped).

If you only need maybe two or three cores, i.e. like FreeNAS, take the faster option. It is never likely to take good advantage of more cores, but if it really needed them, YOU'D WANT THEM TO BE FASTER CORES ANYWAYS, not 2.6GHz ones.
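
A toy model of that argument (my own simplification, not a benchmark: throughput scales with clock on the threads actually busy, capped by physical core count, ignoring turbo and memory effects):

```python
# Toy comparison of the two CPUs under different levels of parallelism.
def effective_ghz(base_clock_ghz, cores, busy_threads):
    # Work gets done by whichever is smaller: threads offered or cores available.
    return base_clock_ghz * min(cores, busy_threads)

cpus = {
    "E5-2650v2 (8c @ 2.6GHz)": (2.6, 8),
    "E5-2637v2 (4c @ 3.5GHz)": (3.5, 4),
}

for busy in (2, 3, 8):
    print(f"--- {busy} busy thread(s) ---")
    for name, (clock, cores) in cpus.items():
        print(f"  {name}: ~{effective_ghz(clock, cores, busy):.1f} GHz of work")
# With 2-3 busy threads (the common FreeNAS case) the 2637v2 wins;
# the 2650v2 only pulls ahead once something keeps 6+ cores busy.
```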


The 2208 on that motherboard does not come with SFF-8086 connectors, just regular SAS-2 connections and maxes out at 16 devices.

Where do you get your information from?

http://www.supermicro.com/products/motherboard/Xeon/C600/X9DR7-TF_.cfm

Why would he want a 9265? You can't use the cache it offers if you are running it in JBOD mode.

Oh. My. God. Did I totally entirely forget to explain that reasoning, more than once?

Why, look, no I didn't forget. You just didn't read.

$200 more for a BBU and you can make a heck of a SLOG ZIL device with it...

Picking a board that offers an LSI2208, on the other hand, is a clever choice, because the cost of a 9265-8i is about $600 all by itself, and it turns out that you can add a BBU inexpensively and then abuse the controller as an extremely fast SLOG ZIL device.

And that's why I included

$300 LSI2008 HBA for ZFS happiness

in my original reply, but I'll admit that that's rather oblique and only people who actually "got" all this would have understood the significance. Just because you have a RAID controller does not dictate that you cannot add in a different controller to handle the disks.

Yes he is looking for an alternative to a 100k enterprise solution, but obviously saving money is great... otherwise you would have been recommending SLC or PCIe-based storage and not consumer MLC SSDs.

Yes, right. So if you're going for ESXi, and you need to be able to support sync writes, and you can hack in very low latency SLOG for a very low cost, instead of needing to spend big bucks later, then your answer is to .... just screw it and not worry about it? Maybe just set sync=disabled because that saves more money? I'm certainly not going to suggest that. I'll add in the low latency extreme endurance SLOG.

Look, it's pretty clear you've not read what I've written. I'm generally happy to discuss the merits of various things at length, but I've little tolerance for people who can't even process things when the answer's handed to them on a platter. I am not going to go another round of repeating what's already been said.
 

Nathan

Cadet
Joined
Sep 9, 2013
Messages
6
What price/performance advantage would that be?

Have you looked at what you're suggesting?

http://ark.intel.com/products/75269/Intel-Xeon-Processor-E5-2650-v2-20M-Cache-2_60-GHz

8 cores. 2.6 base clock. 3.4 max turbo. List price $1166.

http://ark.intel.com/products/75792/

4 cores. 3.5 base clock. 3.8 max turbo. List price $996 (ooo look it dropped).

If you only need maybe two or three cores, i.e. like FreeNAS, take the faster option. It is never likely to take good advantage of more cores, but if it really needed them, YOU'D WANT THEM TO BE FASTER CORES ANYWAYS, not 2.6GHz ones.

If something is able to take advantage of 8-16 threads from the dual procs you recommended, then 16-32 threads even at a 34% clock rate disadvantage would almost assuredly be faster. If it can't then don't bother with MP.

The extra cache and lower wattage of the 2650v2 are also not a bad bonus.

Speaking of not reading.. "Supports up to 16 devices" (2 devices per SAS port).. as opposed to the 9265 which supports 128 because it has 8088 multi-lane SAS connectors.

Oh. My. God. Did I totally entirely forget to explain that reasoning, more than once?

Why, look, no I didn't forget. You just didn't read.

Yeah, but your whole point doesn't make any damn sense. You are suggesting that by getting the onboard 2208 he can save money by not getting the RAID card.. you don't realize that just because that motherboard has an LSI-2208 on it doesn't mean it has the BBU.. it is NOT directly comparable to an LSI-9265 card. I don't know how you can "add" a BBU to an onboard motherboard controller, or if you are just wording that terribly. So back to my point, the onboard 2208 is a waste with ZFS.

But keep on being condescending though.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll stupidly bite...

If something is able to take advantage of 8-16 threads from the dual procs you recommended, then 16-32 threads even at a 34% clock rate disadvantage would almost assuredly be faster. If it can't then don't bother with MP.

Except CPU usage is NOT going to be a limiting factor, even with the less expensive CPU. So you are only paying for more processing power that you will never use.

The extra cache and lower wattage of the 2650v2 are also not a bad bonus.

Yeah, that's maximum TDP. What's idle TDP? Oh, Intel didn't tell you that did they? Exactly.......

And let me give you another hint... as soon as you start putting limits on how slow you are willing to let the server be (hint: you've set the bar fairly high by saying you want to use ESXi), you have also excluded buying decisions that involve saving power by buying something slower. Some of your services are single-threaded too, so who cares if it has 10,000 cores at 2GHz? I want the system with 4 cores at 4GHz.

Even more so if you want to go with ESXi. Have you even looked at the number of people in this forum that have failed to get ESXi to perform acceptably with their hardware? If not, you should go read up. Probably 90% of users give up because they didn't buy adequate hardware to start with, and they can't tune ZFS to work well enough despite weeks of experimenting. More than 90% of users give up on ESXi after a few weeks because IT'S NOT EASY. It's damn hard, even for experts. How do you think you are going to fare without a deep understanding of what ZFS does, why it does it, and how it affects performance of other components? Here's a shocker... jgreco even said that if you plan to go ESXi you should be planning ahead now. Gee, both he and I made the same recommendation, and for a good reason too!

My FN server is virtualized, has 3 cores from an E5-2630v2, and I can do over 400MB/sec. And I know I could do more because CPU usage only goes to about 40%. So do you really think more cores are going to matter? Nope. Single-core performance is where it's at.

Speaking of not reading.. "Supports up to 16 devices" (2 devices per SAS port).. as opposed to the 9265 which supports 128 because it has 8088 multi-lane SAS connectors.

Yeah, but your whole point doesn't make any damn sense. You are suggesting that by getting the onboard 2208 he can save money by not getting the RAID card.. you don't realize that just because that motherboard has an LSI-2208 on it doesn't mean it has the BBU.. it is NOT directly comparable to an LSI-9265 card. I don't know how you can "add" a BBU to an onboard motherboard controller, or if you are just wording that terribly. So back to my point, the onboard 2208 is a waste with ZFS.

But keep on being condescending though.

I won't even dignify your comment with a response.

But anyway, I'm with jgreco. I'll step out of this thread. If you don't want to listen, that's totally your choice. Plenty of people don't listen to some of the more experienced users, and they come back regularly with posts of "I should have listened". I just had one not 2 hours ago! And he most likely has lost all of his data because he didn't listen. :(
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And not to sound like I'm putting jgreco on a pedestal, but he may very well be the authority on large servers on this forum.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If something is able to take advantage of 8-16 threads from the dual procs you recommended, then 16-32 threads even at a 34% clock rate disadvantage would almost assuredly be faster. If it can't then don't bother with MP.

No, it won't, because AS I KEEP SAYING, THE CLOCK SPEED IS THE KEY BIT. More than a few cores are generally useless. Core SPEED is the key. Given extreme load, more cores will be helpful, but the SPEED is still the key.

The point of the dual processor configuration wasn't to get more cores. IT WAS TO GET MORE RAM. Did you not see that above? Going 256GB with a single CPU following YOUR suggestion above costs $9500. Going 256GB with dual CPU's following my suggestion costs only $5000.

Are you tracking that?

The higher density 32GB modules are insanely expensive, plus they're only DDR3-1333. Your way requires them to get to 256GB.

The lower density 16GB modules are reasonably priced, plus they're DDR3-1600. However, to hit that speed and capacity, you're maxed out at 128GB per socket. So I can add a whole SECOND CPU and the total cost for 256GB PLUS a SECOND CPU is still about HALF of your approach.

Which method is more frugal?
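
The DIMM population math behind that, as I read it (assuming the usual 4 memory channels per E5-2600 socket and 2 DIMMs per channel to hold DDR3-1600 speeds; those assumptions are mine, not stated above):

```python
# Why 256GB of DDR3-1600 with 16GB RDIMMs implies populating both sockets.
channels_per_socket = 4      # E5-2600 series is quad-channel
dimms_per_channel   = 2      # a third DIMM per channel usually drops the speed
dimm_gb             = 16     # M393B2G70BH0-CK0 is a 16GB DDR3-1600 RDIMM

per_socket_gb = channels_per_socket * dimms_per_channel * dimm_gb
print(f"Per socket at DDR3-1600: {per_socket_gb} GB")      # 128 GB
print(f"Both sockets populated:  {2 * per_socket_gb} GB")  # 256 GB
# Hence the second CPU: it unlocks the other bank of DIMM slots, and the pair
# of CPUs plus cheap 16GB RDIMMs still undercuts 32GB DDR3-1333 LRDIMMs.
```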

The extra cache and lower wattage of the 2650v2 are also not a bad bonus.

There's no extra useful cache. Cache is allocated per-core. If you cannot use the cores, you do not get the benefit of the cache.

Speaking of not reading.. "Supports up to 16 devices" (2 devices per SAS port).. as opposed to the 9265 which supports 128 because it has 8088 multi-lane SAS connectors.

The 2208 on the X9 board is delivered on dual SFF-8087. Again, failure to read. Or look. Look at the picture.

Yeah, but your whole point doesn't make any damn sense. You are suggesting that by getting the onboard 2208 he can save money by not getting the RAID card.. you don't realize that just because that motherboard has an LSI-2208 on it doesn't mean it has the BBU.. it is NOT directly comparable to an LSI-9265 card. I don't know how you can "add" a BBU to an onboard motherboard controller, or if you are just wording that terribly. So back to my point, the onboard 2208 is a waste with ZFS.

You add it, of course. BTR-0022L-LSI00279 and MCP-450-00001-0N. It would be a waste if you didn't need a SLOG device, but I would question the X9DRH-iTF that you suggested anyways, because it tops out at 16 DIMM modules - the X9DR7-TF+ is 24 (though admittedly in most full slot configs there are tradeoffs). So the X9DR7-TF+ is going to be better for any RAM-intensive operation, especially where there are such ridiculous cost differentials between densities.

And here's my point. We've GOT an X9DR7-TF+. We've GOT the BBU for it. I have yet to determine the actual limits of SLOG performance with it; it goes as fast as the hard disks that I throw at it. It's _awesome_.

Further, I've done some interesting comparisons of clock speeds. For reasons not related to FreeNAS, that X9DR7 of ours sports an E5-2697v2, which really is about the fastest CPU that you can shove on an E5 board, when looking at overall performance. But - shock, SHOCK! - it doesn't perform all that well as a NAS CPU, because the 2.7GHz clock is slow. An E3-1230V2 outruns it (though not by all that much). So when I'm telling someone who is building a dedicated NAS to focus on clock speed, there's a reason for it.

I'm not talking out my butt about hardware I've never seen.

But keep on being condescending though.

If it's condescending to talk to you like this, when you pretend to know what you're talking about with hardware you've not seen and four posts under your belt, and I actually know what I'm talking about when I'm discussing hardware we actually have here, plus sufficient expertise to have authored several of the commonly referenced forum stickies on the related topics, plus a few thousand posts, then I guess I'll just have to accept the label.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If it's condescending to talk to you like this, when you pretend to know what you're talking about with hardware you've not seen and four posts under your belt, and I actually know what I'm talking about when I'm discussing hardware we actually have here, plus sufficient expertise to have authored several of the commonly referenced forum stickies on the related topics, plus a few thousand posts, then I guess I'll just have to accept the label.

Quiet noob, before I turn you over my knee! And you aren't allowed to like it! :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh, and one more thing. If you've got a fileserver that's servicing seventy desktops, the return on investment of spending a few thousand extra dollars is very rapid if it means the desktops are more responsive and users can do their stuff more quickly. Assuming you're paying people peanuts, like $15K a year, that's three million in payroll over three years. If you can make people just half a percent faster, you've saved $15K.
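
For anyone who wants to check that back-of-envelope figure (same assumptions as the paragraph above):

```python
# jgreco's ROI sketch: ~70 desktop users, $15K/year each, over three years.
users, salary, years = 70, 15_000, 3
payroll = users * salary * years        # total payroll over the period
gain    = 0.005                         # make everyone half a percent faster

print(f"Payroll over {years} years: ${payroll:,}")           # $3,150,000
print(f"Value of a 0.5% speedup:   ${payroll * gain:,.0f}")  # $15,750
```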
 

jkh

Guest
I wonder if SJD is regretting asking the question yet. :(

I think it's always a bad sign when someone walks in, asks a fairly innocuous (and I thought well-worded) question and then a fight breaks out. Guys! All of you would probably get along just fine if you were in a room together, in person, in the company of liberal quantities of alcohol. Well, OK, maybe subtract the alcohol and substitute mineral water. Put you all in front of keyboards, however, and it seems like mere milliseconds before the porcupine quills start coming out (and yes, everyone involved in this conversation, with the exception of the original poster, has come across as crotchety and less than perfectly patient - I read the whole thread). Can we at least try to remember that we're arguing in the open, in front of someone whose jaw probably started descending towards the floor after the 3rd or 4th posting in this thread? :)

Thanks!
 

SJD

Cadet
Joined
Sep 24, 2013
Messages
2
jkh,

LOL. That thought did cross my mind. I was waiting for them to lay their appendages on the table and start comparing sizes. It was informative, albeit confusing. Basically nobody likes my configuration, and I need more RAM and a faster processor. I already own the hardware, so I can make some changes easily (RAM and processor). I'm betting that 1.7TB of L2ARC is ridiculously high, so I'm dumping 2 of the SSDs used for that and adding 2 more 3TB disks.

The comment about the cost not being too much of an issue is accurate. I was looking to spend between $10-12k, but can easily go higher for a significant performance improvement. We're an $800M company; we can afford it, just as we can afford the EMC, but I hate giving money to those bastards and I like trying out different options. It may end up being a $15k exercise in futility, or it may work well. We shall see.

I do appreciate the passionate feedback even if some posts were basically just "WTF?"

SJD
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The real problem comes down to FreeNAS having very specific and unique needs compared to every other file server OS I've seen. For example:

1. Go with non-ECC RAM on any file server that's not ZFS and it's not the end of the world, but you might end up with some trashed files on your server if your RAM goes bad. Do that with ZFS and your entire pool will be trashed. If you do rsync for backups, well, kiss your backups goodbye too. Woohoo for automated corruption.
2. Go with NFS and CPU frequency matters a lot less than if you use CIFS. Most people here have never used anything except CIFS. But they will be quick to get upset when NFS can work with their older machine but CIFS doesn't.
3. ZFS has serious CPU needs compared to every other file system. And too many people are only gonna get pissed when their CPU is not powerful enough.
4. How many people show up here because "I can repurpose my circa-2000 computer and get 100MB/sec... woohoo!"? And don't get me started on people that have never seen 50MB/sec from a Windows server ever but are gonna hit the ceiling if they can't get 100MB/sec with even older hardware than their Windows server (see #3 here too).

And the list goes on and on.....

The point is that people don't know what to expect, but ignorantly set the bar very, very high. I had my hopes high when I first started playing with FreeNAS. Some hopes came true as soon as I bought the right hardware (read: spent $). Other hopes went by the wayside (such as repurposing my old desktop). And until someone does their research to go from that ignorant and innocent person to someone with reasonable expectations, fights can and will break out (and they can and will continue to break out). Most of the experienced people here just want to send the person in the right direction. Instead they get all upset because their answer isn't the right one. Most of us aren't here to validate a bad build or unreasonable expectations, but for many threads, that's all they want.
 