NFS XenServer VMs


MtK

Patron
Joined
Jun 22, 2013
Messages
471
striped-mirrors, striped-mirrors, striped-mirrors, striped-mirrors...
yes, I know, striped-mirrors!


Now that we are clear on that:
I have a pool of two striped 6-disk RAIDZ vdevs used as XenServer shared storage.
Each drive is 300 GB SAS, and the server has 24 GB of RAM.

It's been working great until now. All the VMs are shared hosting solutions, hosting thousands of websites, mostly open-source CMSs (WordPress, Joomla, Drupal, etc.).
Why do I mention this? To emphasize that the workload is mainly random-read intensive.

The heavy load actually comes at backup time, split into two main actions:
  1. Each VM (not necessarily at the same time) makes a tar of each user's /home/USER directory... again, lots of small reads (significantly more than the actual write of the tar.gz file). Now multiply this by the number of users (and a few VMs), and this becomes a very long (read) process.
    These tar.gz files are stored on an NFS mount (outside of the VM's VDI) but still on the same ZFS pool; this is done to have a bit more of an "unlimited" storage.
  2. [Mostly not at the same time] each VM's backup (from the NFS mount) is rsync'd to an external ZFS server (a different server, but still on the same local network). A rough sketch of both steps is below.
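For reference, a rough sketch of both steps; the paths, user names, and backup host below are just examples, not the exact commands:

# step 1, inside a VM, per user: archive the home directory straight onto the NFS mount
tar -czf /mnt/backup-nfs/USER.tar.gz -C /home USER

# step 2, per VM: push the finished archives to the external ZFS server
rsync -a /mnt/backup-nfs/ backupuser@external-zfs:/pool/backups/vm1/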

As it seems, outside of backup time everything works great, and yes, the RAIDZ vdevs do provide enough IOPS/throughput for the VMs. But at backup time, the VMs seem to need a little bit more juice.


Yesterday and today I tested in the 'normal', not-under-stress situation, and I do get 80-110 MB/s (as I should from a 1 Gbps connection).
I also created a new VM with a minimal CentOS on different hosts connected to the same storage, and I get the same good (80-110 MB/s) results!


Any idea what I can do (or at least test) so the pool can actually handle the stress at backup time?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I'm probably stating the obvious, but since I can't really take the storage offline and rethink its structure, I have to come up with an online improvement plan.

The server has 24 bays and only 12 are currently occupied.
So as I see it, I have these options:
1. Buy 6 drives and add a third vdev.
2. Buy 2 SSDs and add an L2ARC.
3. Start thinking about 2/3-way mirrors...

To save money I'd prefer to be able to keep using those 300 GB disks...
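
For reference, options 1 and 2 would boil down to something like this (pool and device names are placeholders):

# option 1: add a third 6-disk RAIDZ vdev to the existing pool
zpool add tank raidz da12 da13 da14 da15 da16 da17

# option 2: add the two SSDs as L2ARC (cache) devices
zpool add tank cache da18 da19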

Thoughts?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Have you looked at the system stats to ID the bottleneck? zpool iostat?
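For example, something like this run during a backup window, compared against a quiet period (pool name is a placeholder):

# per-vdev/per-disk throughput and operations, refreshed every 5 seconds
zpool iostat -v tank 5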

I'd look into adding RAM, then an L2ARC, and then a faster pool.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Have you looked at the system stats to ID the bottleneck? zpool iostat?
That's actually part of the problem. zpool iostat doesn't indicate any overload on the pool...

I'd look into adding RAM, then an L2ARC, and then a faster pool.
More than 24 GB of RAM for a 3-4 TB pool?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
What does RAM reporting show?

It sounds like it isn't a throughput problem, since you can saturate the gigabit link. Sounds like an IOPS issue. Hence the larger L2ARC. But you need more RAM to add a large L2ARC.
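If you want some data before buying anything, check the ARC hit rate first; on FreeBSD/FreeNAS the raw counters are available via sysctl, roughly like this (exact stat names can vary by version):

sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.size

A poor hit ratio during the backup window would support the more-RAM/L2ARC route.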

Or buy faster drives for the pool.

There isn't a magic "extra-juice" option you can enable. :smile:
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
These are already 10k SAS drives.
How much more RAM are we talking about?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
How much is being used? How much is free? What is the average wait time for your transactions? If you want me to guess, I'd say 256 GB of RAM and a 1 TB SSD. But that is purely a guess given no data.
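
Roughly, these will show it (run them while a backup is going; all are stock FreeBSD/FreeNAS tools):

top                 # Wired/ARC and Free memory lines
gstat -p            # per-disk busy % and ms per read/write
zpool iostat -v 5   # per-vdev throughput and operations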
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
How can I see those stats?
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
As mentioned, I now have 2 vdevs of 6 drives each in RAIDZ, and I have a total of 20 drives.
I want to convert it all to a pool of 10 2-way mirrors.

Any suggestion on how to do this with as little downtime as possible?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
How much data do you have? Can it all fit on 4 2-way mirrors (the 8 spare drives)? Then you could destroy the two RAIDZ vdevs and add those 12 drives as 6 more 2-way mirrors.

Otherwise you will need some temp space somewhere else.
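
A rough sketch of that plan, assuming the old pool is 'tank', the new one is 'tank2', and the device names are placeholders:

# build a temporary pool of 4 2-way mirrors from the 8 spare bays
zpool create tank2 mirror da12 da13 mirror da14 da15 mirror da16 da17 mirror da18 da19

# replicate the data; repeat with incremental sends to shrink the final cutover window
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2

# after the final sync: destroy the old pool and add its 12 disks as 6 more mirrors
zpool destroy tank
zpool add tank2 mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11

# optionally rename the new pool back to the old name
zpool export tank2 && zpool import tank2 tank

The downtime is essentially the last incremental send plus repointing the XenServer SR/NFS share at the new pool.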
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Unfortunately it doesn't fit on 4 mirrors, or even 8.
And even if it did... I'm trying to avoid an unbalanced pool...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Then you need some temp space. Pretty simple but unfortunate.
 