SAN Capable of 240 VMs, each assuming will use same IOPS of single 7.2K disk

Status
Not open for further replies.

Steven Sedory

Explorer
Joined
Apr 7, 2014
Messages
96
Our goal: build a SAN that can provide storage at, on average, the performance of a single 7.2K disk for up to 120 VMs
Exception: some VMs on this system will use a bit more, but I believe the above is a good starting point for what we're trying to accomplish

Plot: we're building a three-node Hyper-V 2012 R2 cluster, using FreeNAS's iSCSI for connectivity - built correctly, avoiding the avoidable bottlenecks
Each node will have/be:
HP DL360P G8
2x Intel X520-DA 10G NICs connected via twinax to the SAN (eliminates the need to power 10G switches)
1x 256GB DDR3 ECC
2x 2.2GHz 10-core CPUs
SAN will have:
Dell R720xd w/ 24 2.5" bays
256GB DDR3 ECC
2x 2.9GHz 8-core CPUs
6x Intel X520-DA 10G NICs
2x Intel 750 (or P3700 if suggested) for SLOG, set up as a mirror
1x Intel 750 (or P3700 if suggested) for L2ARC
1x Avago/LSI 9300-8i HBA (dual SAS expander)
15x 600GB 15K SAS 2.5" drives set up as striped mirrors w/ 1 hot spare, ~4,200GB usable
9x SATA SSDs configured as "direct attach" iSCSI LUNs for certain VMs
2x SAS expander cables off the backplane to the rear of the server for adding a JBOD when more storage is needed
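As a back-of-envelope sanity check on the spindle layout above - assuming 14 data drives in 7 two-way mirrors after the hot spare, and roughly 175 random IOPS per 15K SAS drive (my assumption, not a vendor figure, and before any ARC/L2ARC/SLOG effects) - the raw pool numbers work out like this:

```python
# Rough IOPS estimate for the proposed pool: 15 drives minus 1 hot spare
# = 14 drives in 7 two-way mirror vdevs.
# IOPS_PER_15K_DRIVE is an assumed figure for small-block random I/O.
DRIVES = 15
HOT_SPARES = 1
IOPS_PER_15K_DRIVE = 175

mirrors = (DRIVES - HOT_SPARES) // 2
# Writes hit both sides of each mirror; reads can be served from either side.
write_iops = mirrors * IOPS_PER_15K_DRIVE
read_iops = mirrors * 2 * IOPS_PER_15K_DRIVE

print(f"{mirrors} mirror vdevs")            # 7 mirror vdevs
print(f"~{write_iops} random write IOPS")   # ~1225
print(f"~{read_iops} random read IOPS")     # ~2450
```

So on raw spindles alone this pool is well short of 12,000 IOPS; the plan leans heavily on RAM/ARC, L2ARC, and the SLOG absorbing the difference.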
Other notes:
We will have sync writes on for data protection, requiring the use of the SLOG

Desired Capacity and Assumptions:
  • As a three-node cluster, our max capacity would be the aggregate of two nodes: 512GB RAM and 40 2.2GHz pCPUs/cores (not including hyperthreading)
  • Using a 1:6 pCPU:vCPU ratio, I believe we can fit 120 VMs w/ 2x vCPUs and 4GB RAM each
  • That being said, again, our goal is to support 120 VMs as if they each had their own 7.2K disk (obviously with varying sizes)
  • Based on a model of 100 IOPS per VM (which is an okay metric for a single 7.2K disk, right?), we would need 12,000 IOPS
  • And if we also used the number 100 for the transfer rate, 100MB/s (~800Mbps), and assumed at most 25% of the VMs would use that at any given time, that'd be 24Gbps to or from the SAN over 6 different 10Gbps links
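The arithmetic behind those last two bullets can be sketched as follows (the per-VM figures - 100 IOPS, 100MB/s peak, 25% concurrency - are the assumptions stated above, not measurements):

```python
# Back-of-envelope check of the stated capacity assumptions.
VMS = 120
IOPS_PER_VM = 100
PEAK_MB_PER_S_PER_VM = 100        # note: MB/s, i.e. ~800 Mbps per VM
CONCURRENT_FRACTION = 0.25        # assume at most 25% of VMs peak at once

total_iops = VMS * IOPS_PER_VM
# 30 concurrent VMs * 100 MB/s * 8 bits/byte = 24,000 Mbps = 24 Gbps
peak_gbps = VMS * CONCURRENT_FRACTION * PEAK_MB_PER_S_PER_VM * 8 / 1000

print(total_iops)   # 12000
print(peak_gbps)    # 24.0
```

Note the 24Gbps figure only falls out if the per-VM transfer rate is read as 100MB/s; at 100Mbps per VM it would be only 3Gbps.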
Questions pertaining to everything above:
  1. What problems do you see with the above setup?
  2. What problems do you see with the above capacity desires and assumptions?
  3. Is creating one pool, and then carving it into either zvols or file extents, the best way to go?
    1. It is my understanding that our Intel 750s, once used on pool 1, for example, could not be used on other pools added later
    2. It is also my understanding that one pool would be best, because it would get faster as vdevs were added to it, versus creating a new pool
  4. If my IOPS assumptions above are correct, will this SAN as described perform at or above the desired 12,000 mark?
  5. If the answer to the above is "sort of" or "sometimes", what could we do to fix that?
  6. Also, will it be capable of transfer speeds of 24Gbps?
  7. What single points of failure in the SAN, besides the SAN being a single device itself, do you see that we can avoid?
  8. Along with the above question, if we could squeeze in another 9300-8i HBA (or a different set of HBAs), can we set them up for redundancy in FreeNAS/FreeBSD?
  9. Do you see an issue with expanding via dual SAS expander cables off the Dell R720xd's backplane to a JBOD or two over the next couple of years as we need more capacity?
  10. Regarding the above question, if it's not an issue, would adding a vdev to the existing pool be the best option, or creating a different pool (probably the same answer as #3, right)?
  11. Anything else I should consider? Should we start from the ground up with a different solution to accomplish what I've described under "Desired Capacity and Assumptions"?
Thank you for your time and input. It is much appreciated. We're about to drop $20-30k into this, and as a small company, it's a big move for us. Your input is invaluable and might just receive a little thank you.

Looking forward to your responses.


-Steven Sedory, Vertical Computers
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
My suggestion would be to call iXsystems for a quote, or reach out to Nimble Storage for one. For what you're wanting, you would be much better served buying a system with high-availability dual controllers and support.

We run Nimble arrays and NetApp arrays at my data center and I would recommend Nimble all day long.
 

Steven Sedory

Explorer
Joined
Apr 7, 2014
Messages
96
> My suggestion would be to call iXsystems for a quote, or reach out to Nimble Storage for one. For what you're wanting, you would be much better served buying a system with high-availability dual controllers and support.
>
> We run Nimble arrays and NetApp arrays at my data center and I would recommend Nimble all day long.

Thanks for the suggestion. I'll check Nimble out (we already talked to iX on previous projects, and it's just way out of budget).

However, we have built some SANs close to this in the past; we just haven't pushed over 100 VMs. It's been a while, there are some new technologies out, and I'm dusting off the rust by talking with people like yourself. I'd really prefer to work with FreeNAS. I know the capability is there.
 