VM on one pool DATA on another

Status
Not open for further replies.

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Hey,
For the sake of this argument, let's assume we have 2 pools: one with a RAIDZ configuration, and one with mirrored vdevs (hence higher IOPS).
Besides that, we have a few VMs (~10), each of them (Linux) hosting a few hundred websites. Those websites are CMS based (WordPress, Joomla, Drupal, etc.), so basically the VMs mostly need high random reads.

I know I can move the VMs to the mirrored pool, but assuming enough RAM and network capability (10GbE), how about mounting /home (where all the websites' files are) via NFS directly from the mirrored pool? Would I gain/lose anything? Maybe dedup the NFS share for all the identical files...?
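Since the workload is mostly random reads, one empirical way to answer the gain/lose question is to time small random reads against the docroot before and after moving it onto the NFS share. Below is a minimal sketch, assuming a hypothetical /mnt/webroot mount point and a 4 KB read size (both are illustrative, not from this thread); note that the ZFS ARC and the client page cache will flatter repeat runs, so use a large working set or drop caches between runs.

```python
import os
import random
import time

# Hypothetical paths/parameters -- adjust for the actual setup.
WEBROOT = "/mnt/webroot"   # docroot, either local disk or the NFS mount
SAMPLES = 500              # number of random reads to time
READ_SIZE = 4096           # small-block read, typical of CMS include files

# Gather regular files large enough to read READ_SIZE bytes from.
candidates = []
for root, _, names in os.walk(WEBROOT):
    for name in names:
        path = os.path.join(root, name)
        try:
            if os.path.getsize(path) >= READ_SIZE:
                candidates.append(path)
        except OSError:
            pass  # skip broken symlinks and the like

if not candidates:
    raise SystemExit(f"no files >= {READ_SIZE} bytes under {WEBROOT}")

latencies = []
for _ in range(SAMPLES):
    path = random.choice(candidates)
    offset = random.randrange(os.path.getsize(path) - READ_SIZE + 1)
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.seek(offset)
        f.read(READ_SIZE)
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"{len(candidates)} files sampled, {SAMPLES} reads")
print(f"median: {latencies[len(latencies) // 2] * 1e3:.2f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1e3:.2f} ms")
```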

Would appreciate your thoughts on this.

 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
You can certainly do this. In fact, this is the de facto standard for high-availability/load-balanced web server setups if you don't want to store the data in two places... you have some sort of HA fileserver/NAS/SAN in the middle, and you mount the content from there so each web server is always certain to be seeing the same data. Other options exist, but they get progressively more kludgey.

However, you're adding complexity. Have you identified an issue with the performance of your current setup? Is your mirrored pool full so you're looking to offload content? This may be a case of "if it ain't broke, don't fix it" :)
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Thanks!

Generally speaking, the mirrored vdev pool doesn't exist yet.
The current performance of the RAIDZ pool is poor for obvious reasons, and the pool is bigger than anything I can build with the remaining empty HDD slots. So I'm also looking to reduce used space, because I can't take the machines offline and rebuild the entire pool.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
If you're IOPS-bound, it's all about the spindles (or SSDs, but that's another topic). Assuming we're talking about the build in your sig block, unless you convert the entire array to striped mirrors, you're not going to see an appreciable increase in speed.
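A back-of-the-envelope sketch of why the layout matters so much, using the usual rule of thumb that each vdev contributes roughly one member drive's worth of random IOPS (mirrors can do somewhat better on reads, RAIDZ does not). The 140 IOPS per 10K SAS drive is an assumed ballpark figure, not a measurement:

```python
# Rough rule of thumb: each vdev contributes about one member drive's worth of
# random IOPS (mirrors can do somewhat better on reads, RAIDZ does not).
PER_DRIVE_IOPS = 140   # assumed ballpark for a 10K SAS drive

def pool_iops(total_drives: int, drives_per_vdev: int) -> int:
    """Estimate random IOPS for a pool built from identical vdevs."""
    vdevs = total_drives // drives_per_vdev
    return vdevs * PER_DRIVE_IOPS

# Same 12 drives, two different layouts:
print("2 x 6-drive RAIDZ: ~", pool_iops(12, 6), "IOPS")   # ~280
print("6 x 2-way mirrors: ~", pool_iops(12, 2), "IOPS")   # ~840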

There are plenty of threads on here about NFS performance for VMs, and it isn't an easy task. I'm running 14x 15,000RPM 450GB SAS drives for my VM store to get the performance I need.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
No, it's not the build in my sig. And yes, I know I need mirrored vdevs, hence the research for solutions.

The current build is 2 x 6-drive RAIDZ (300GB 10K SAS), but I can't really tell the 'needed' IOPS.
I do have 12 slots available and a few similar 300GB drives to make the initial mirrors, until I can 'free' the drives in the RAIDZ pool and add a full 7-10 vdevs (2- or 3-way mirrors), leaving a few slots for extra juice...
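For planning the target layout, a quick sketch of usable space and rough IOPS for 7-10 mirror vdevs built from 300GB drives, using the same one-drive-per-vdev rule of thumb as above (the per-drive IOPS figure is an assumption; going 3-way instead of 2-way buys redundancy, not capacity or random IOPS):

```python
DRIVE_GB = 300        # drive size from the current build
PER_VDEV_IOPS = 140   # assumed random-IOPS ballpark for one 10K SAS drive

for vdevs in (7, 8, 9, 10):
    for way in (2, 3):
        drives = vdevs * way
        usable_gb = vdevs * DRIVE_GB      # a mirror vdev's usable size = one drive
        iops = vdevs * PER_VDEV_IOPS
        print(f"{vdevs} x {way}-way mirror: {drives:2d} drives, "
              f"~{usable_gb} GB usable, ~{iops} IOPS")
```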
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Trying to mathematically derive the required IOPS is voodoo black magic. The big storage manufacturers, and I'm sure iXsystems too, have calculators to help figure out the requirements... but, at the end of the day, nothing beats getting everything up and running and seeing if things just "feel" slow.

Unless you have a lot of data that's happy on "slow" storage, my suggestion would be to simply migrate everything and not add the complexity of two datastores. However, you're going to need additional drives to make that a reality.

I've had good luck buying drives off eBay from trusted sellers and using them. I've been buying 450GB 15K SAS drives for $30. This is inherently risky... but a good burn-in plus backups mitigates the risk. Perhaps this is an option for you to consider? Get enough drives to fill up the remaining 12 bays, migrate all the data you can and back up the rest, then transform your existing RAID-Z array into mirrored vdevs and add them into the pool.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I agree!
And yes, the idea is to fill 8-10 slots as soon as possible and migrate as much as possible.

Those 15Ks sound great and I would love to hear more about them (maybe privately?). The problem would be mixing them with the 10Ks, which I would really try to avoid...
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
There's no difference between the 10K and 15K drives other than rotational speed, lower seek latency, and higher IOPS. On average, 10K SAS drives are good for about 140 IOPS, 15K for 175-200 IOPS. Not a huge difference, but on a 12-drive array (6 x 2-way mirrors) you go from 840 IOPS to 1050-1200. Not a bad increase, for simply going to different drives.
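The arithmetic behind those pool figures, made explicit (per-drive numbers are the rough averages quoted above, and each mirror vdev is assumed to contribute about one drive's worth of random IOPS):

```python
MIRROR_VDEVS = 6   # 12 drives laid out as 6 x 2-way mirrors

per_drive = {"10K SAS": 140, "15K SAS (low)": 175, "15K SAS (high)": 200}

# Rule of thumb: each mirror vdev contributes about one drive's random IOPS.
for drive, iops in per_drive.items():
    print(f"{drive}: {MIRROR_VDEVS} vdevs x {iops} IOPS = {MIRROR_VDEVS * iops} IOPS")
# 10K: 840 IOPS; 15K: 1050-1200 IOPS -- matching the figures above.
```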
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I know the difference between the two, thanks.

In any case, now that I see your sig, those 15K drives are 3.5", and I need 2.5" (at least for now).
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Yes, my array is all 3.5" stuff. Cheaper, higher capacity, and I'm not constrained for size.

Actually, the 2.5" form factor is somewhat in your favor, performance-wise: lower seek latency (and thus slightly higher IOPS) because the heads don't move as far.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I know, I still need 12 disks and a plan ;-)
 