BUILD HA storage system for 40 VMs

Status
Not open for further replies.
Joined
Jun 17, 2016
Messages
3
Hi everyone,
we are looking to build fast networked storage for our virtualization infrastructure. We need to manage 40 to 100 VMs, from 4TB to 20TB, using FreeNAS.

Here is our hardware list:
1 SSG-2028R-DE2CR24L Supermicro Barebone 2U SuperStorage Server with two Nodes - Each Node supports up to Dual Intel Xeon E5-2600 v4/v3 family processors
http://www.wiredzone.com/supermicro...ne-dual-processor-ssg-2028r-de2cr24l-10026032
2 BX80621E52670 Intel 2.60GHz Xeon E5-2670 8-Core Socket-2011
http://www.wiredzone.com/intel-components-cpu-processors-server-bx80621e52670-32027806
8 MEM-DR416L-CV01-ER24 Supermicro 16GB PC4-19200 (DDR4-2400) 288-pin RDIMM ECC Registered, VLP, CL17, 1.2V, 2Gx72
http://www.wiredzone.com/supermicro-components-memory-ddr4-mem-dr416l-cv01-er24-10024861
8 MZ7LM960HCHP-00005 Samsung Hard Drive SSD 960GB SATA3 6Gbps, 2.5in
http://www.wiredzone.com/samsung-components-hard-drives-enterprise-mz7lm960hchp-00005-10025142
0 AOC-MCX312A-XCBT-MLN Supermicro ConnectX-3 2-Port 10 GbE Network Adapter
http://www.wiredzone.com/supermicro-components-network-adaptors-wired-aoc-mcx312a-xcbt-mln-10023269
2 MCX312B-XCCT Mellanox ConnectX-3 Pro 2-Port 10 GbE Adapter
http://www.wiredzone.com/mellanox-components-network-adaptors-wired-mcx312b-xcct-10024708

Do you think it will be suitable for us?
What performance should we expect, in terms of IOPS and throughput?
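For context, here's the rough back-of-envelope we've been doing ourselves, assuming the 8 SSDs go into 4 mirrored vdevs (a common layout for VM storage; we haven't settled on one). The per-drive figures below are placeholder guesses, not measurements:

```python
# Back-of-envelope ZFS pool estimate: 8 SSDs as 4 mirrored vdevs.
# Per-drive figures are placeholder assumptions, not measured numbers.
DRIVES = 8
MIRROR_VDEVS = DRIVES // 2

drive_read_iops = 90_000   # assumed 4K random read per SSD
drive_write_iops = 18_000  # assumed 4K random write per SSD
drive_mbps = 500           # assumed sequential throughput per SSD

# Reads can be served by any disk in a mirror; writes hit every mirror once.
pool_read_iops = drive_read_iops * DRIVES
pool_write_iops = drive_write_iops * MIRROR_VDEVS
pool_read_mbps = drive_mbps * DRIVES
pool_write_mbps = drive_mbps * MIRROR_VDEVS

print(f"~{pool_read_iops:,} read IOPS, ~{pool_write_iops:,} write IOPS (ceiling)")
print(f"~{pool_read_mbps:,} MB/s read, ~{pool_write_mbps:,} MB/s write (ceiling)")
```

Obviously real numbers for sync-heavy NFS/iSCSI VM traffic will land well below these theoretical ceilings.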
 
Joined
Mar 22, 2016
Messages
217
You're going to need to change your processors. You've listed a V1/V2 E5, not a V3/V4, which is what that barebone requires. The latter of those two is around 5-7 times the price.

That being said, if you don't need the memory density of the E5-2600 series, a much cheaper route would be a UP board and an E5-1650 V4.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
FreeNAS does not support HA. If you want HA, call iXsystems for a TrueNAS system or look for another solution.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The E5-2670 is a Sandy Bridge CPU and will not work with that platform. There's no need for a dual-CPU, dual-node configuration here anyway.

The ConnectX-3 adapters aren't a preferred selection. Please see the 10Gig sticky in the networking subforum.

In re the topic: FreeNAS does not support high availability. However, managing resources as a datastore cluster and using Storage vMotion to migrate data out of the way for downtimes does work.

You might want to look instead at the recipe I've mentioned numerous times on this forum for the VM filer we run.
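If you want to script that Storage vMotion shuffle rather than click through vCenter, a minimal sketch with pyVmomi (just one way to drive the vSphere API; nothing from this thread) looks something like the following. The hostname, credentials, and object names are placeholders:

```python
# Minimal Storage vMotion sketch using pyVmomi (pip install pyvmomi).
# Hostname, credentials, and object names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory for the first managed object matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "my-vm")          # placeholder VM name
target_ds = find_by_name(vim.Datastore, "freenas-ds2")  # placeholder datastore

# Relocate the VM's disks to the other datastore while it keeps running.
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec=spec)
# ... poll task.info.state until it reports 'success' or 'error'

Disconnect(si)
```

Loop that over the VMs on the box you want to take down and you have a poor man's maintenance mode for the storage side.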
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This system shares the SSD backplane between the nodes; they are not 2 servers with 2 storage arrays. Is this system supported by FreeNAS?

No, not really. There's a case to be made for such a deployment as a standby node configuration, but that's not really a topic for beginners.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
No, not really. There's a case to be made for such a deployment as a standby node configuration, but that's not really a topic for beginners.
I think building a second box and doing Storage vMotion, as you suggested, is a better way to go than an offline node.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Have you got any information on standby node configuration? Any link?

No. And in general, I'd say the value is extremely low: a lot of capex, but in the end you still have a single physical box that can break.
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
The ConnectX-3 adapters aren't a preferred selection. Please see the 10Gig sticky in the networking subforum.

As a data point, apparently the Mellanox adapter support is in 9.10-STABLE as of today.

It'll be interesting to see what the usage/stability/etc. turns out to be like after a couple of months. Hardware-wise the cards are generally pretty good, but that's only part of the story. Hopefully it turns out well. :D
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As a data point, apparently the Mellanox adapter support is in 9.10-STABLE as of today.

It'll be interesting to see what the usage/stability/etc. turns out to be like after a couple of months. Hardware-wise the cards are generally pretty good, but that's only part of the story. Hopefully it turns out well. :D

Happen to know if it includes the ConnectX-2? I did pick one up to play with because I hate putting hardware in the stickies that I haven't actually tried.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I've been thinking of getting some IB cards and a Topspin 120 switch to try out as a cheap entry into 10G. It would be really nice if you could use the IB options to get 40G links.


 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
I've been thinking of getting some IB cards and a Topspin 120 switch to try out as a cheap entry into 10G. It would be really nice if you could use the IB options to get 40G links.

Hmmm, that sounds like there's some mix-up here (could be me though ;>).

From memory, Topspin 120 switches only support 10G InfiniBand mode (called "SDR", which is ancient and almost unsupported in modern software), rather than 10GbE (10Gb Ethernet) mode (as commonly used in FreeNAS setups).

FreeNAS (at present) doesn't support native InfiniBand mode unless you roll your own.

Am I grokking something wrong?
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
It was late and I was thinking the ConnectX stuff was IB cards; they're just more SFP+ cards. It does appear that there are IB ConnectX cards, so that's what I was thinking of. Although I see that with the ConnectX-3 cards you can do 40GbE. That would be interesting to try out. It's the switches that really end up costing some coin. Direct connect between servers is fairly affordable, though.
 