BUILD Looking for Second Opinions - Hyper-V Cluster Storage

Joined: Mar 5, 2015 | Messages: 3

The plan is to set up a new storage device using FreeNAS 9.3 and keep the cost around $13,000.

I currently have the following hardware planned:
Qty 1 - SuperMicro chassis - 216BAC-R920LPB (room for 24 2.5-inch drives) - $1,300
Qty 1 - SuperMicro motherboard - X10DRC-T4+ (four Intel 10GbE ports and an LSI controller) - $750
Qty 2 - Intel Xeon E5-2623 v3 processors (3 GHz, 4 cores) - $500 each
128GB DDR4 ECC memory - $1,400
Qty 3 - LSI 9300-8i HBAs - $250 each (multiple 12Gbps SAS lanes may or may not be necessary for this SSD setup, but I also do not want to have to crack open the case and add cards later)
Qty 1 - Intel X540-T2 (10GBase-T NIC for expansion)
Qty 8 - Samsung 845DC PRO 800GB SSDs - $850 each (to start, with later expansion up to 24; these drives have power-loss protection, high endurance, and 20% overprovisioning)
Qty 2 - Samsung 845DC EVO 240GB - $200 each - mirrored boot drives (the 845DCs are necessary for the capacitors)

The plan is to run mirrored-pair vdevs striped across an 8-drive zpool initially and, as capacity is needed, add mirrored pairs to the pool up to a maximum of 12 drives. A new pool would be created after 12 drives, using a different LUN/NIC and iSCSI target IP. My hope is that 12 drives will come close to saturating an interface.
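As a quick sanity check on that growth plan, here is a back-of-envelope sketch of how usable capacity and IOPS scale as mirrored pairs are added. The per-drive IOPS figures are assumptions for illustration, not vendor specs:

```python
# Rough capacity/IOPS model for a pool of striped 2-way mirror vdevs.
# Per-drive numbers below are illustrative assumptions, not measured specs.

DRIVE_TB = 0.8             # Samsung 845DC PRO 800GB, decimal TB
DRIVE_READ_IOPS = 90_000   # assumed 4K random read IOPS per SSD
DRIVE_WRITE_IOPS = 50_000  # assumed 4K random write IOPS per SSD

def mirror_pool(n_drives, mirror_width=2):
    """Model a zpool built from n_drives arranged as mirror vdevs."""
    vdevs = n_drives // mirror_width
    return {
        "vdevs": vdevs,
        "usable_tb": vdevs * DRIVE_TB,            # one drive of capacity per mirror
        "read_iops": n_drives * DRIVE_READ_IOPS,  # mirrors serve reads from all sides
        "write_iops": vdevs * DRIVE_WRITE_IOPS,   # writes hit every side of each mirror
    }

print(mirror_pool(8))    # initial 8-drive pool: 4 mirrors
print(mirror_pool(12))   # grown to the planned 12-drive maximum: 6 mirrors
```

The useful property for this plan is that both capacity and IOPS scale linearly with each mirrored pair added, so the pool can grow in $1,700 increments without rebuilding.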

Our current environment has 12 physical servers and 6 virtual machines that we are planning to move to a Hyper-V failover cluster, using the new SSD FreeNAS system as an iSCSI target for the Cluster Shared Volumes and quorum. Two of these machines are currently running SQL Server on Server 2008 and will be migrated to VMs on 2012 R2/SQL 2014 with 128GB RAM allocated to each. All of the databases, log files, and tempdb currently reside on the same LUN on an EMC VNXe3150. I have my doubts that the EMC can deliver the IOPS I need for the new VMs and believe we can see dramatic improvements using the SSD NAS. The plan is to host the virtual machines on the FreeNAS LUN, along with the databases and log files, and use the EMC for tempdb and a few of the low-activity VMs.

Our largest databases are around 30GB, with around 70 devices connecting at any given time and 15 databases total, but only 3 that are fairly active. I set up a FreeNAS file server running 9.2.1.8 last year that has been performing like a rock star: a single Xeon 1670 v2, 64GB of memory, mirrored 845DC 240GB OS drives, 10GbE NICs, and a 128GB L2ARC. The ZFS config is 16 WD Red 4TB drives in four 4-drive RAIDZ2 vdevs. I may move a couple of things to this storage device as well to split things up. It is currently only serving as an iSCSI target for a CentOS Samba server (I never could get CIFS to function stably enough for production and opted for the iSCSI route using Samba 3 and Winbind).
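For reference, the usable space of that RAIDZ2 layout works out as follows (simple spec arithmetic, before ZFS metadata overhead):

```python
# Usable space of the existing file server: 16x 4TB WD Red in
# four 4-drive RAIDZ2 vdevs (each vdev: 2 data + 2 parity drives).

drive_tb = 4
vdevs = 4
drives_per_vdev = 4
parity_per_vdev = 2      # RAIDZ2 dedicates two drives' worth of parity per vdev

raw_tb = vdevs * drives_per_vdev * drive_tb
usable_tb = vdevs * (drives_per_vdev - parity_per_vdev) * drive_tb
print(raw_tb, usable_tb)   # 64 TB raw, 32 TB usable
```

Note that a 4-drive RAIDZ2 vdev has the same 50% space efficiency as a mirror, just with different failure and performance characteristics.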

The main goal of this project is to ensure our storage device has enough IOPS to handle 15-20 VMs and some SQL databases; bandwidth is secondary but still important. I also want expandability for future projects such as VDI, which could add 40-50 virtual machines. Ideally I would like some kind of HA/load balancing, or at least replication, by building an identical server in the future to replicate to. It makes me nervous to use only mirrors in the zpool in a production environment, but this is most likely necessary to keep the IOPS high, and hopefully the low failure rates of these drives and the power-loss protection offset that a bit. We also keep nightly onsite and offsite backups of all systems and will be live-replicating the VMs to an offsite Hyper-V server. I've also considered the TrueNAS systems, but I'm not sure what configurations they offer or how difficult it is to add SSD pairs as needed; I will most likely contact them for a quote.

Feel free to blow holes in my logic and any provided suggestions are welcome.
 

zambanini

Patron
Joined: Sep 11, 2013 | Messages: 479
Why not use a three-way mirror? More read speed, more redundancy.
 
Joined: Mar 5, 2015 | Messages: 3
Great point. That would definitely take away some of the risk of using two-way mirrors, and I could split it into 3 pools once I have fully expanded the NAS: two 9-disk pools of three 3-way mirrors and one 6-disk pool of two 3-way mirrors. The only thing I really lose is 4 drives of capacity versus two-way mirrors. Thanks for the pointer; definitely worth considering when I start throwing this together.
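The capacity trade-off above checks out; across a fully populated 24-bay chassis it is just integer division by the mirror width:

```python
# Usable drive count across 24 bays: two-way mirrors vs the proposed
# three-way layout (9 + 9 + 6 disks as three-way mirror vdevs).

total_drives = 24
two_way = total_drives // 2     # 12 drives of usable capacity
three_way = total_drives // 3   # 8 drives of usable capacity (3 + 3 + 2 vdevs)

print(two_way - three_way)      # 4 drives given up for the extra redundancy
```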
 

marbus90

Guru
Joined: Aug 2, 2014 | Messages: 818
Do not use FreeNAS as primary production storage. It does not support failover to another system. Please give iXsystems a call and request pricing on a TrueNAS, for which SAS SSDs are required.

I wouldn't recommend Samsung SATA SSDs in servers for any purpose. With databases, a SAS HBA, and an expander, you want SAS SSDs. Don't waste that money on boot SSDs; the SATA DOMs are perfectly fine:
http://www.supermicro.nl/products/nfo/SATADOM.cfm
You don't need any of the SAS HBA cards. The board has one LSI SAS chip onboard, which is perfectly fine. For a chassis I'd rather pick this:
http://www.supermicro.nl/products/chassis/2U/216/SC216BE1C-R920LP.cfm
which comes with a 12Gbps SAS expander. It'll give you an aggregated bandwidth of 9.6GB/s, limited by the controller's PCIe connection to ~7.8GB/s. IOPS are more important, and no set of SSDs in this build will saturate that link.
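The 9.6GB/s and ~7.8GB/s figures above come from common link-budget approximations (roughly 1.2GB/s usable per 12Gbps SAS3 lane; PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding), not measurements:

```python
# Back-of-envelope link math behind the 9.6 GB/s vs ~7.8 GB/s figures.
# The ~1.2 GB/s per usable SAS3 lane is a common approximation.

sas_lanes = 8                          # two x4 wide ports from HBA to expander
sas_gbs = sas_lanes * 1.2              # ~1.2 GB/s usable per 12Gbps SAS lane

pcie_gbs = 8 * 8.0 * (128 / 130) / 8   # PCIe 3.0 x8: 8 GT/s/lane, 128b/130b encoding

print(round(sas_gbs, 1), round(pcie_gbs, 2))  # the PCIe side is the bottleneck
```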
 
Joined: Mar 5, 2015 | Messages: 3
Thanks. I was definitely on the fence about whether to use the extra HBAs, but you pretty much answered my question there, so I will ditch them and use the onboard controller. I will definitely use the chassis you recommended and look at using the SuperMicro DOMs; I am guessing a mirrored boot device is not all that necessary. I was curious, however, why you recommend against the Samsung 845DC PRO 800GB models. They are supposedly enterprise drives, and the main glaring difference I could see was that they use SATA instead of SAS, which, from what I understand, would still work on the expander. Correct me if I am wrong there. They are rated at 10 DWPD like most of the enterprise SAS drives, with similar overprovisioning and a capacitor bank to protect against power loss. They do use 3D NAND vs SLC, which could be a downside, but with comparable drive writes and MTBF ratings, I am unsure it is that big of an issue at a quarter of the price.
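For context on what that 10 DWPD rating means in total writes, here is the spec arithmetic (assuming the usual 5-year warranty term):

```python
# Endurance sanity check: total terabytes written implied by a DWPD rating.
# Assumes a 5-year warranty term; this is list-spec arithmetic only.

capacity_gb = 800
dwpd = 10            # rated drive writes per day
years = 5

tbw = capacity_gb * dwpd * 365 * years / 1000   # terabytes written
print(tbw)   # 14600.0 TB, i.e. ~14.6 PB over the warranty period
```

That works out to writing the full drive forty times over every month for five years, which is far beyond what a 15-20 VM workload is likely to generate.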

I have also contacted iX and have a dialog going with one of their architects. The initial pricing on the driveless HA config was not bad at all; I'm just waiting to see, spec- and performance-wise, what we come up with for our environment and whether it is something I will be able to sell to the number crunchers.
 

marbus90

Guru
Joined: Aug 2, 2014 | Messages: 818
I'd always use mirrored boot, since that's supported in FreeNAS 9.3. My low-priced SSD preference would be the Toshiba HK3E2 800GB, but usually you'll be paying $2-2.5k per high-end 800GB SSD. These would also come with 12Gbps SAS interfaces, so fewer issues are to be expected with regard to transfer speed.
 