BUILD Enterprise-Grade Build


timupci

Cadet
Joined
Jan 25, 2016
Messages
6
I am the Director of Information Systems for a medium-sized non-profit. I am looking to build or purchase a SAN system. For my whitebox solution, I was looking into using FreeNAS. I have had a FreeNAS box running as a file server for my department for the last few years. It has been solid and stable.

Here is the whitebox solution I have been looking into.

Case: SUPERMICRO CSE-216E16-R1200LPB Black 2U Rackmount Server Case
MB: SUPERMICRO MBD-X10DRI-T4+-O Enhanced Extended ATX Xeon Server Motherboard Dual LGA 2011 Intel C612
Processor: Intel Xeon E5-2660 v2 Ivy Bridge-EP 2.2 GHz LGA 2011 95W BX80635E52660V2 Server Processor
Controller: areca ARC-1883ix24-8SA PCI-Express 3.0 x8 SAS RAID Adapter
Memory: Kingston 32GB ECC DDR4 2133 (PC4 17000) Server Memory LRDIMM QR x4 w/TS Model KVR21L15Q4/32
Cache Drives: Intel Fultondale 3 DC P3600 AIC 400GB PCI-Express 3.0 MLC Solid State Drive - OEM
Storage Drives: Kingston SSDNow KC300 SKC300S37A/120G 2.5" 120GB SATA III Enterprise Solid State Drive with Adapter

The setup would feature quad 10GbE, dual processors, a PCIe SSD for 400 GB of caching (in/out), 24 SSDs as storage drives (RAID-Z3), and memory maxed to 768GB.

Priced at about $17,000 USD.

My goal here is maximum performance for multiple SQL databases, including our new ERP system. Secondary use would be Hyper-V for MS Terminal Server or individual VMs. Most of this would be done via iSCSI over 10GbE.

Some other SAN/NAS units/companies I am looking into are PureStorage, Tegile, and Nimble. My secondary option for the whitebox would be to use MS Server 2012 R2 Datacenter, which I am currently using for all my VM servers.

Q1: Tiered storage - does FreeNAS set a priority of RAM > PCIe SSD > SSD storage in the configuration? I know that SSD caching is possible.
Q2: FUTURE QUESTIONS
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
While you can go ahead and build your own white box, for something that size, it's probably a good idea to reach out to iX Systems (the maintainers of FreeNAS) and see if they can spec you a TrueNAS system, complete with support: https://www.ixsystems.com/contact-us/

I'm not sure if they provide special discounts for not-for-profits, but it wouldn't hurt to ask.
 

noobnas

Dabbler
Joined
Aug 18, 2014
Messages
20
I agree about contacting iX Systems about maybe building it. Your server sounds awesome in theory, but one thing that sticks out to me is your CPU. It'd be a great chip for VMs and the like, but for your 10 GbE, 2.2 GHz just isn't going to cut it for single-threaded workloads (in this case SMB) and you will throttle hard regardless of drive performance. If you were going to use Windows shares, remember that SMB is NOT multithreaded. I have a couple of hex-core 2.8 GHz X5660s that hit between 30-50% load on a single core with just 1 GbE. Really, 3 GHz+ would be recommended.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Hopefully you're not planning to use hardware RAID with FreeNAS.
Does FreeNAS set a priority of RAM > PCIe SSD > SSD storage in the configuration? I know that SSD caching is possible.
I'm not sure what you mean by "priority" in this context.
ZFS uses RAM for caching when possible. This is known as ARC.
In a suitably configured system, a 2nd level of caching on an intermediate device can be beneficial, though it does consume RAM that would otherwise be available for ARC. This is known as L2ARC. You might use a PCIe SSD for this.
After that, you're talking directly to the main storage pool.
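
For a rough sense of that RAM cost, here is a back-of-envelope sketch in Python. The ~70 bytes of ARC header per cached L2ARC record is a commonly cited ballpark, not an exact figure, and the average record size depends entirely on your workload:

    def l2arc_ram_overhead_gb(l2arc_gb, avg_record_kb, header_bytes=70):
        records = l2arc_gb * 1024 * 1024 / avg_record_kb  # records the device can hold
        return records * header_bytes / 1024**3           # RAM eaten by headers, in GB

    # 400 GB PCIe SSD as L2ARC, 16 KB average record size (database-ish workload):
    print(round(l2arc_ram_overhead_gb(400, 16), 2), "GB of ARC consumed by L2ARC headers")

So a 400GB L2ARC with small records costs on the order of a couple of GB of ARC, which is why L2ARC only pays off once you have plenty of RAM to spare.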
 

timupci

Cadet
Joined
Jan 25, 2016
Messages
6
Hopefully you're not planning to use hardware RAID with FreeNAS.
No. I am planning on using RAID-Z3 with the controller in JBOD mode.

I'm not sure what you mean by "priority" in this context.
ZFS uses RAM for caching when possible. This is known as ARC.
In a suitably configured system, a 2nd level of caching on an intermediate device can be beneficial, though it does consume RAM that would otherwise be available for ARC. This is known as L2ARC. You might use a PCIe SSD for this.
After that, you're talking directly to the main storage pool.
By "priority" I was referring to data tiering.

I agree about contacting iX Systems about maybe building it.
I had already scheduled a quote call with iX Systems.

Your server sounds awesome in theory, but one thing that sticks out to me is your CPU. It'd be a great chip for VMs and the like, but for your 10 GbE, 2.2 GHz just isn't going to cut it for single-threaded workloads (in this case SMB) and you will throttle hard regardless of drive performance. If you were going to use Windows shares, remember that SMB is NOT multithreaded. I have a couple of hex-core 2.8 GHz X5660s that hit between 30-50% load on a single core with just 1 GbE. Really, 3 GHz+ would be recommended.
So are you stating that I should lower the number of cores and increase the GHz? Or just spend more for a faster CPU all around? SMB and Windows shares would not be hosted on the SAN; those would be on my VM servers. The only I/O would be via iSCSI.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Please do more reading. The controller you've suggested isn't recommended... stick with one of the tried-and-true LSI controllers, like the 9211-8i. You've also stated in your last message that your workload is VMs... that really changes the game. You'll need to include an SLOG device (two in a mirrored pair if you're picky... religious argument) and use striped mirrors, not RAIDZ. You'll also want to keep the total pool usage below 50%.

As far as data tiering, if you're referring to automatically moving data from faster storage to slower storage, I don't believe this is a supported configuration in FreeNAS. That's a very SAN-type requirement... which usually comes along with SAN-type price tags.

In short - having just built a somewhat beefy VM storage box myself... and not to sound negative... you either need to do a lot more reading or consult iXSystems so they can build you what you need.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Also for VM usage, you need to have some idea as to what your working set size is, and you almost certainly need more RAM. If you do any research on the forum, you'll find a very nice formula that I've been providing, which is based on the E5-1650 v3. You can go to a dual board, which has some benefits, but it's likely you're better off doing something similar to the VM server we have here. The ARC is somewhat less useful when you have a large flash array, so you do not necessarily need a heaping massive pile of it.
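
As a back-of-envelope illustration of working set sizing (this is not the forum formula referenced above; every input here is a made-up placeholder to show the shape of the estimate):

    # All inputs are hypothetical placeholders; substitute measured values.
    vms = 20              # number of VMs on the filer
    hot_gb_per_vm = 8     # actively re-read data per VM, in GB (a guess)
    overhead_gb = 16      # OS, services, ARC metadata

    working_set_gb = vms * hot_gb_per_vm
    print(f"working set ~{working_set_gb} GB; RAM target ~{working_set_gb + overhead_gb} GB")
    # On an all-SSD pool, an ARC miss is far cheaper than on spinning disks,
    # so RAM far beyond the working set buys little.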

For a filer, the E5's you want to consider are:

E5-1620v3 - 4 core
E5-1650v3 - 6 core, best bang-for-buck in Xeon

The E5-16's do not work with LRDIMM, just RDIMM.

E5-2637v3 - 4 cores per CPU
E5-2643v3 - 6 cores per CPU - fastest NAS platform for E5
 

timupci

Cadet
Joined
Jan 25, 2016
Messages
6
Please do more reading.
Well, that is why I am here. I have a few months available on this project. I have been doing my research on SANs and NAS devices.

The controller you've suggested isn't recommended... stick with one of the tried-and-true LSI controllers, like the 9211-8i.
This is the second reason I am here: looking at the list of recommended hardware.

You've also stated in your last message that your workload is VMs... that really changes the game. You'll need to include an SLOG device (two in a mirrored pair if you're picky... religious argument) and use striped mirrors, not RAIDZ. You'll also want to keep the total pool usage below 50%.
I stated the primary usage would be an SQL database (application hosted on a Hyper-V host) connected via iSCSI. I did add in two NVMe devices for caching. Are you saying I should add one (mirrored) specifically as a SLOG? Isn't that what the "caching" is? (That was my understanding.)

As far as data tiering, if you're referring to automatically moving data from faster storage to slower storage, I don't believe this is a supported configuration in FreeNAS. That's a very SAN-type requirement... which usually comes along with SAN-type price tags.
The SAN-type solutions that I have looked into do offer some type of data tiering, i.e., the most-used data is stored on the faster devices (SSD) and the least-used data is stored on the long-term storage (HDD). There are a few that keep the most-used data in RAM. But from what you are saying, FreeNAS does not have that as an option (which I am perfectly OK with).

In short - having just built a somewhat beefy VM storage box myself... and not to sound negative... you either need to do a lot more reading or consult iXSystems so they can build you what you need.
I do have a quote request out to iXSystems. I have been doing a lot of reading, but yes, more reading and research is needed on my part.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Also for VM usage, you need to have some idea as to what your working set size is, and you almost certainly need more RAM.

It's not terribly clear in the first post, but...
memory maxed to 768GB.

My understanding is that he intends to max the RAM at 768GB. That should be sufficient :)
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
The choice of small 120GB SSDs with 768GB RAM is strange. I would go with less RAM and spend that on larger, possibly eMLC (if the pool will be very active) SSDs. Also, for the SLOG, use a P3700 instead of a P3600; the 400GB version isn't much more money.

That CPU won't work with that motherboard. The CPU is socket 2011, the board is 2011-3.
You need a "v3" series Xeon.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My understanding is that he intends to max the RAM at 768GB. That should be sufficient :)

Sufficient or a waste. My guess: a waste. I'm thinking that a substantially more targeted system might provide a much less pricey alternative. See, the words I saw up top were "medium sized non-profit". The biggest reason to go with massive ARC for VM servers is to allow L2ARC to more effectively cache useful bits. But if you've got an all-SSD pool, reads from the pool are approximately as fast as reads from the L2ARC, so maximizing the size of the ARC is no longer a primary concern.

There are some other things that stand out to me with this. While I do like the 216E16 chassis (our VM filer is a 216E26), the expander chip is ... limiting ... in this design. I've not had good luck with x8 wideports, and the use of 6Gbps SAS would limit the backplane to 24Gbps.

Here's an alternative design that I suspect would be pretty awesome:

SuperChassis 216A-R900LPB - $800
Supermicro X10SRL-F - $220
Intel E5-1650 v3 - $550
Samsung M393A4K40BB0-CPB 32GB x 8 units - $1800
LSI 9211-8i x 2 units - $400
Mainboard ports for remaining 8 or 10 slots if you get the two-bay rear drive tray
Intel DC S3500 480GB x 6 units - $2400
Intel DC P3700 400GB x 2 units - $1800
Add some Chelsio T420-CR 10GbE cards from eBay.

About $8,500, with a LOT of room for additional SSD growth.
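
As a quick sanity check of that total (the Chelsio T420-CR price is not listed above, so the ~$250-per-card figure below is just an assumption for used eBay stock):

    # Sanity check of the parts total above.
    parts = {
        "SuperChassis 216A-R900LPB": 800,
        "Supermicro X10SRL-F": 220,
        "Intel E5-1650 v3": 550,
        "8x Samsung 32GB RDIMM": 1800,
        "2x LSI 9211-8i": 400,
        "6x Intel DC S3500 480GB": 2400,
        "2x Intel DC P3700 400GB": 1800,
        "2x Chelsio T420-CR, used (assumed ~$250 each)": 500,
    }
    print(sum(parts.values()))  # 8470 -- roughly the $8,500 quoted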
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I agree - most likely a waste. And, with all these little SSDs (by the way, Kingston SSDs aren't known for longevity... these, even though they are enterprise, are only rated 2 DWPD), you end up with barely 600GB usable storage, assuming 2-way mirrors and honoring the 50% rule. Some may disagree, but Intel is where it's at for enterprise-class SSDs.

I'd also analyze your data to determine if it *really* all needs to live on SSD storage. For instance, the OS partition almost certainly doesn't. You've talked about data tiering, but you only have one tier... expensive. I'd consider adding a handful of spinning rust to give you some slower/bigger/cheaper storage.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's also working in favor of my proposed design: you could stick half a dozen WD Red 1TB 2.5" drives in there and *STILL* be half empty.
 

timupci

Cadet
Joined
Jan 25, 2016
Messages
6
Also, for the SLOG, use a P3700 instead of a P3600; the 400GB version isn't much more money.
Thanks for the advice. The P3700s were not that much more.

The choice of small 120GB SSDs with 768GB RAM is strange. I would go with less RAM and spend that on larger, possibly eMLC (if the pool will be very active) SSDs.
I am still debating which storage method I want to go with: HDD or SSD. A 2.5" 300GB 15K SAS 6Gbps drive is still $500. A 2.5" 400GB SAS 6Gbps eMLC SSD is $599. I could start out with 1TB usable of SSD RAID10 and expand later.

I am still in the beginning stages of the building/quoting phase of this project.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Don't say RAID10... no such thing in ZFS-land. 2-way or 3-way striped mirrors would be your options.

Keep in mind that SSDs also have longevity concerns. I would recommend the DC S3700/S3710 for your data drives, as they're rated for 10 DWPD (10 complete drive overwrites per day for 5 years). The S3500s are 0.3 DWPD. The S3610s are 3 DWPD, if you need something in the middle. Things like SQL servers tend to thrash drives. Pay very close attention to the endurance ratings, and please stay with something that's well known. Intel is my strong recommendation. You don't want enterprise-class SSDs from Joe's SSD and Bait Shop.

For sizing, for 2-way striped mirrors, you effectively get ~22-23% of the raw space as usable. If you have 6 400GB drives, that's 2.4TB raw... 1.2TB after mirroring... a bit of overhead for metadata - let's say 1.15TB left... then the 50% rule = 575GB usable.
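
The same arithmetic as a small sketch (the ~4% metadata allowance is my assumption to land on the ~1.15TB figure above; the endurance function just restates the DWPD rating as total writes):

    # Usable space for 2-way striped mirrors under the 50% rule.
    def usable_gb(drives, drive_gb, metadata_frac=0.04, max_fill=0.5):
        mirrored = drives * drive_gb / 2             # 2-way mirrors halve raw space
        after_meta = mirrored * (1 - metadata_frac)  # ~4% metadata allowance (assumed)
        return after_meta * max_fill                 # stay below 50% full

    print(round(usable_gb(6, 400)), "GB usable")  # ~576 GB, matching the estimate above

    # Total writes implied by a DWPD rating over a 5-year warranty:
    def endurance_tb_written(drive_gb, dwpd, years=5):
        return drive_gb * dwpd * 365 * years / 1000

    print(endurance_tb_written(400, 10), "TB")  # S3700-class 400GB at 10 DWPD: 7300 TB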

One thing you haven't told us... how much space do you actually need?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The 2.5" 300GB 15K SAS drive is probably not a good choice unless endurance is a key factor, and these days I would still be tempted to go with a DC S3710 400GB instead - capable of sucking down 10 drive writes per day for five years, and only around ~$600. 15K drives are effectively dead.

Instead of picking one or the other, you could do both. Bump up the density on the SSD as I suggested above, leaving lots of bays free.

You probably really want to go read everything I've written on the topic in the last year, because I've effectively got a variation on the system that you seem to be seeking, running our VM loads here. :smile:
 

timupci

Cadet
Joined
Jan 25, 2016
Messages
6
One thing you haven't told us... how much space do you actually need?
With most of the SANs I have quoted, I have told them about 5-10 TB after compression/dedup calculations, so about 2.5 to 5 TB pre compression/dedup.
I don't foresee the ERP SQL database getting larger than 1TB, hence the rest being used for some VM drives.
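
Working backwards from those numbers (back-of-envelope only; the 2:1 reduction ratio is what the figures above imply, and the mirror layout plus 50% rule follow the earlier posts):

    # Raw SSD implied by the stated logical capacity requirement.
    def raw_needed_tb(logical_tb, reduction=2.0, mirror_ways=2, max_fill=0.5):
        physical = logical_tb / reduction          # on-disk after 2:1 reduction
        return physical * mirror_ways / max_fill   # mirrors, then the 50% rule

    for logical in (5, 10):
        print(logical, "TB logical ->", raw_needed_tb(logical), "TB raw")
    # 5 TB -> 10.0 TB raw, 10 TB -> 20.0 TB raw: far beyond 24x 120GB drives.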
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I did add in two NVMe devices for caching. Are you saying I should add one (mirrored) specifically as a SLOG? Isn't that what the "caching" is? (That was my understanding.)
L2ARC and dedicated SLOG device are two quite different beasts. L2ARC is a second tier read cache (after ARC, which lives in RAM). A dedicated SLOG device caches the ZIL (ZFS Intent Log) on reliable high speed storage to improve performance when sync writes are required, which they presumably are in your application. A dedicated SLOG device can typically be quite small, which allows for significant over-provisioning for longevity.
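
As a rough illustration of why a SLOG can be tiny (the 5-second figure is the default ZFS transaction group timeout, vfs.zfs.txg.timeout, on FreeBSD; the single saturated 10GbE link is an assumption):

    # A SLOG only has to absorb a couple of transaction groups of sync writes.
    link_gbps = 10        # one saturated 10GbE link (assumption)
    txg_seconds = 5       # default vfs.zfs.txg.timeout on FreeBSD
    txgs_in_flight = 2    # open group plus the one being synced (rule of thumb)

    slog_gb = (link_gbps / 8) * txg_seconds * txgs_in_flight
    print(f"~{slog_gb:.1f} GB of SLOG is ample")  # ~12.5 GB
    # A 400GB P3700 used as SLOG is therefore massively over-provisioned,
    # which also stretches its write endurance.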
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
My secondary option for the whitebox would be to use MS Server 2012 R2 Datacenter, which I am currently using for all my VM servers.
Regarding the existing VMs: if you already own Windows Server 2012 R2 Datacenter and are planning to run Windows Server 2012 R2 VMs, then some consideration should be given to what the cost would be of not having those VMs on Hyper-V.

I am only mentioning this because, per Microsoft licensing, the Datacenter version allows you to run unlimited instances of Windows Server 2012 R2 (all flavors) without requiring additional licenses. Now if you were to run Windows Server 2012 R2 instances within VirtualBox (not sure if you are thinking about doing this), then you are going to need a license for each instance.

If the case is where you are simply housing the VHDX files on FreeNAS and running them on a different box with Server 2012 R2 Datacenter, then you should be fine with the licensing.

Not sure if it impacts your decision or not, just thought I would add that to the conversation.
 

timupci

Cadet
Joined
Jan 25, 2016
Messages
6
I am only mentioning this because, per Microsoft licensing, the Datacenter version allows you to run unlimited instances of Windows Server 2012 R2 (all flavors) without requiring additional licenses.
This is exactly why I went with DataCenter.

If the case is where you are simply housing the VHDX files on FreeNAS
This was also my plan for any SAN product.
 