Question: Better option for VM storage


xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
Hey guys, quick question...

I'm considering moving my ESXi storage (at home) to my FreeNAS box. Which of these would be the best short- and long-term solution?

Current storage: 6x 3TB WD Red Pro in RAIDZ2 (FreeNAS with 32GB memory, maxed out) + 2x 500GB Crucial SSDs in RAID1 (in the ESXi host)
a) use the 2x SSDs in a mirror, then migrate that to striped mirrors (RAID10) when I purchase 2 more drives
b) use the 2x SSDs as a ZIL/SLOG for the RAIDZ2 pool
c) do both a + b
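
For clarity, here's roughly what I mean by (a) and (b) in zpool terms — purely a sketch with placeholder names (ada* = the SSDs, "tank" = the existing RAIDZ2 pool), not my actual layout:

Code:
# (a) separate SSD pool for the VMs: start as a mirror, later add a second
#     mirror vdev to end up with striped mirrors (RAID10-style)
zpool create ssdpool mirror ada0 ada1
zpool add ssdpool mirror ada2 ada3    # after buying 2 more SSDs

# (b) instead, attach the SSDs to the existing RAIDZ2 pool as a mirrored SLOG
#     (this only helps sync writes over NFS/iSCSI, not general throughput)
zpool add tank log mirror ada0 ada1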

It will be running mainly low-usage test VMs; however, there are 2 heavy database VMs which are heavily used daily for development, and I use them daily for testing and dev work.

EDIT: At this stage it would not be possible to rebuild the spinning-disk pool. I'm looking for a short-term solution to tack on now, and then for what I should aim for long term.

Any friendly advice is welcome. Thanks
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Rebuild your pool as striped mirrors, then use the SSDs for SLOG. Remember, databases are IOPS-bound... and you only get the IOPS of the slowest drive in each vdev. A single RAIDZ2 vdev is going to choke with any reasonable load from a database, especially if you try doing anything else at the same time.
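
A rough sketch of that layout (placeholder device and pool names, and assuming you're able to destroy and recreate the pool):

Code:
# three mirror vdevs give roughly 3x the write IOPS of a single 6-disk RAIDZ2 vdev
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# the two SSDs as a mirrored SLOG to absorb sync writes from NFS/iSCSI
zpool add tank log mirror ada0 ada1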
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
tvsjr said:
Rebuild your pool as striped mirrors, then use the SSDs for SLOG. Remember, databases are IOPS-bound... and you only get the IOPS of the slowest drive in each vdev. A single RAIDZ2 vdev is going to choke with any reasonable load from a database, especially if you try doing anything else at the same time.
Thanks for the reply, tvsjr. At this moment it would not be possible for me to rebuild my HDD pool:
a) I'm at 60% capacity, and swapping to mirrors would drop my capacity a fair amount until I added more 3.5" drives (which is not really possible in the current case).
b) I simply don't have time to move that amount of data at the moment.

That's why I was asking whether it made more sense to use the SSDs in a separate pool, as I was unsure if the SLOG would offer better performance in this case. Long term I am planning to move to mirrors, but most likely not for at least another 6 months (finishing building my house, which is eating my $$).
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
What problem are you trying to solve by moving away from using the local storage in your ESX server?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, it's pretty rough to compete with the performance of DAS.

I just approved a new pair of hypervisors here. As a penny-pincher, I'm not against spending sensible money... but at this point that seems to mean local SSD where possible. For example, the Intel 535 480GB has 40GB/day write endurance and has been on sale recently for $150. The DC S3500 is something like 150GB/day but is around $330. I figure it's cheaper to put five 535s on a RAID controller in two sets of mirrors plus a hot spare and let them get beat on; if one fails in two or three years, there's the hot spare, and I can then replace the failed drive with a $100 480GB drive with even more endurance. Unless you actually need massive endurance (like the S3710's incredible 4TB/day), this seems like the way to go.
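
Back-of-envelope with the prices above (rounded, purely illustrative): five 535s is 5 x $150 = $750 for two mirrored pairs plus a hot spare, versus 5 x $330 = $1,650 for the same layout with S3500s. Per GB/day of endurance the S3500 is actually the better deal (roughly $2.20 vs $3.75 per GB/day), but if 40GB/day is more than the VMs will ever write, the cheaper drives win on total outlay.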

I'm also sticking three 2.5" 1TB WD Reds in the hypervisors, in RAID1 with one as a hot spare. Talk about cheap. All together, this ends up sub-$1000 for ~2TB of storage, half of which is SSD, all of which is RAID1-with-spare. The RAID controller does cost something, though ;-)
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The big problem with DAS (and why I'm hosting VMs on my FreeNAS) is the lack of shared storage for vMotion (or simply migrating VMs from system to system). If this isn't a requirement, or if you have an alternative solution (like active/passive failover at the application level), DAS is almost certainly the way to go.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can potentially use storage vMotion to move one VM at a time onto a small shared NAS and then back onto the other machine :smile:
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
True, although with a dramatic increase in the time it takes to move a VM.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, but basically it is going to be a ton faster the rest of the time. I mean, look, I like FreeNAS and I'd love for it to be the most awesome solution for VM storage, but in order to get to that point I basically had to build a $5000 box, and local SSD on a local RAID controller is still a lot faster - for about $1000.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
It would be nice to hear what problem the OP is trying to solve, though. I got the impression there might only be 1 ESX host, so shared storage and storage vMotion would be a moo-point (like what a cow thinks - Joey from Friends).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Plus bonus points for anyone who can make sense of the references.
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
depasseg said:
It would be nice to hear what problem the OP is trying to solve, though. I got the impression there might only be 1 ESX host, so shared storage and storage vMotion would be a moo-point (like what a cow thinks - Joey from Friends).
Indeed, I currently have 1 ESX box. However, the plan is to add a second sometime next year. At that stage I will decide whether to stay with VMware or migrate to Hyper-V, but that's a conversation for another time.

Currently the VMs are running on the 2x SSDs separately. I was originally planning on adding an additional 2x SSDs and changing this to RAID10.
#1 To do this in the existing chassis I would need to add a RAID card (I refuse to use on-board, as it's a standard SATA controller). Yes, I could buy a card (such as the IBM M1015), but that's even more money and kind of pointless if I add another host.
#2 If I wish to run 4x 2.5" drives in my chassis, I actually cannot use the PCIe slot: the case is 1U, and space restrictions mean the drives would prevent me from using it.
#3 I will most likely (95% sure) be adding at least 1 more host within the next 12 months.

Hope this helps
 

xCatalystx

Contributor
Joined
Dec 3, 2014
Messages
117
jgreco said:
Yes, but basically it is going to be a ton faster the rest of the time. I mean, look, I like FreeNAS and I'd love for it to be the most awesome solution for VM storage, but in order to get to that point I basically had to build a $5000 box, and local SSD on a local RAID controller is still a lot faster - for about $1000.

EDIT: Never mind this comment, it was wrong... very wrong.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
vMotion naturally requires a shared datastore of some sort, so you weren't wrong. However, the expense of acquiring such a thing just so you can seamlessly move a VM is kind of high.
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
jgreco said:
vMotion naturally requires a shared datastore of some sort, so you weren't wrong. However, the expense of acquiring such a thing just so you can seamlessly move a VM is kind of high.
Kinda correct...

You can vMotion guests between hosts without shared storage (i.e., without an iSCSI/NFS NAS or SAN), but with caveats:
  • VMware 5.1.x or higher
  • Essentials Plus (I believe this is the minimum) or a higher license
  • The migration has to be initiated from the web client
I've got access to an Enterprise Plus lab setup, and I verified that vMotion of a guest from the local storage of the originating host to another host in the cluster, using the target's local storage, does work without having to storage-vMotion the disk image to a shared location first.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Huh. How'd I never notice that? Oh. Probably because I hate the web client and most stuff here is already on shared storage. :smile:
 

Daurtanyn

Dabbler
Joined
Jan 12, 2016
Messages
11
I'm also a big fan of shared NFS storage. But, as stated, it doesn't beat DAS. My suggested solution is to use shared redundant DAS (SRDAS).

Several storage vendors offer external RAID controllers with quad SAS host-facing connections. Using redundant SAS-3 connections, that gives you 96Gb/s (yep, almost 100Gb/s of connectivity between each of the four ESXi hosts and their local RAID controllers).
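
The math behind that figure, assuming SAS-3 at 12Gb/s per lane and ignoring protocol overhead: 4 lanes per wide port x 12Gb/s = 48Gb/s per connection, and 2 redundant connections per host = 96Gb/s aggregate (only fully usable when both controllers are active/active).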

I call this group of four execution hosts and the associated SRDAS a VM island. You have instant storage vMotion between ESXi hosts within the VM island.

For migration between VM islands, you use an NFS storage unit visible from all ESXi hosts on all VM islands and the two-hop method mentioned earlier in the thread.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I believe what you're discussing is more commonly referred to as a SAS SAN, such as the Dell MD3000 along with up to four HBAs (not RAID controllers). I'm also aware of the LSI Syncro solution.

The usual problem with MD3000-type devices is their relatively poor performance; while the links from the HBA to the RAID controller might be 24Gbps, you're still limited to hard-drive speeds (at best).
 

Daurtanyn

Dabbler
Joined
Jan 12, 2016
Messages
11
The Dell MD series (OEMed from NetApp) has improved in recent years. The current offerings provide "disk pools", which better distribute the I/O load across the spindles. Additionally, recent versions support flash drives as a read cache.

The per-lane speed of SAS-3 is 12Gb/s, so each four-lane connection is 48Gb/s. Dual-port HBA cards in each host (each port connected to a RAID controller) therefore provide 8 SAS-3 lanes to the storage system.

Of course, all that bandwidth is only available if the controllers are active/active, meaning you can round-robin across both. Older generations were active/passive; recent code generations ARE active/active, so you CAN round-robin.

"SAS SAN", to my mind, implies a SAS switch, like the SAS6160 (but that's only 6Gb). There are no SAS-3 switches, to my knowledge. Still, direct connections from a host to a RAID controller are not a network, to my way of thinking.
 