Adding drives to an existing array, sanity check ...

Status
Not open for further replies.

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
Hi all ... I have an existing ZFS array running mirrors. There are currently 8 pairs of 3TB drives, put together to give 24TB of raw storage. I'm starting to get up there in usage, at about 50% of that capacity, so I'd like to throw some more drives at this system. I have 12 open bays, so I was thinking about adding another 12 drives (6 pairs) to give me another 18TB of space. Is this something that can be added to an existing array? Any best practices?

I definitely want to make sure I've dotted the "i"s and crossed the "t"s before doing anything, because this data is the live back end for VMware, and critical.
 
Joined
Feb 2, 2016
Messages
574
Yes. Adding mirrors to an existing pool is supported and a common practice. You shouldn't have any problems at all. (I'd be remiss though if I didn't add "make sure you have a full and complete backup before adding additional VDEVs".)
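For reference, extending a pool with new mirror vdevs from the shell looks roughly like this (a sketch; the pool name "tank" and the da* device names are placeholders for your own — on FreeNAS you'd normally do this through the GUI volume manager):

```shell
# Check the current layout first
zpool status tank

# Add one new mirror vdev of two drives (repeat once per pair).
# Double-check device names with `camcontrol devlist` before running this:
# `zpool add` is effectively irreversible, as top-level vdevs
# cannot be removed from the pool afterwards.
zpool add tank mirror /dev/da16 /dev/da17

# Verify the new vdev shows up with its own capacity line
zpool list -v tank
```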

You might want to think about how you are using your storage. While mirrors are quick like a bunny, they aren't efficient in terms of storage space. If you have a mix of data, some of which requires speedy access and high IOPS, keep that on your striped mirror. On the other hand, if you have bulk data that doesn't meet that description (media, backups, regular office documents, etc.), you may be better off adding a RAIDZ2 pool.

mirrors... 12 X 3TB = 18TB
RAIDZ2... 12 x 3TB = 30TB

You also need not add the same size drives to an existing pool. The price on 4TB and 6TB drives is fairly reasonable. If I were buying today, I'd likely go with 6TB drives.

mirrors... 6 X 6TB = 18TB (using half as many bays, ports and power)
RAIDZ2... 7 x 6TB = 30TB (using almost half as many bays, ports and power)
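The usable-space math above is easy to sanity-check: mirrors give you half the raw drive count, RAIDZ2 gives you all but two drives' worth of data:

```shell
# mirrors: half the drives hold data
echo $(( 12 * 3 / 2 ))    # 12 x 3TB in mirror pairs -> 18
echo $(( 6 * 6 / 2 ))     # 6 x 6TB in mirror pairs  -> 18

# RAIDZ2: all but two drives hold data
echo $(( (12 - 2) * 3 ))  # 12 x 3TB RAIDZ2 -> 30
echo $(( (7 - 2) * 6 ))   # 7 x 6TB RAIDZ2  -> 30
```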

Cheers,
Matt
 

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
You also need not add the same size drives to an existing pool. The price on 4TB and 6TB drives is fairly reasonable. If I were buying today, I'd likely go with 6TB drives.

mirrors... 6 X 6TB = 18TB (using half as many bays, ports and power)
RAIDZ2... 7 x 6TB = 30TB (using almost half as many bays, ports and power)

I would be very interested to hear what @jgreco has to say about mixed drive sizes. I mean, I'd be all for going with 4TB or 6TB drives for the next 12, but is that going to cause me other issues?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You might want to think about how you are using your storage. While mirrors are quick like a bunny, they aren't efficient in terms of storage space.

You would never do that for a production VMware backend datastore seeing typical usage. I don't have any idea why he thinks this is a good idea. RAIDZ is basically evil for this sort of usage, as the IOPS will hit a significant bottleneck. Also be aware of the ZFS RAIDZ variable space allocation issue: with a poorly designed combination of pool layout and ZVOL block size, you can end up eating massive amounts of space, far more than mirrors would, if you do it wrong.

It's definitely a best practice to put all your ISOs, backup tarballs, and other big files on a RAIDZ in a different pool, though you can store those on mirror space as well, as long as you don't mind the wasted space. You can't always win that one.

More curious is that you've been able to make a single ZVOL work. You might want to experiment carefully if you add more disk space. VMware limits the queue depth, etc.; maybe take a look-see at stuff like

http://www.pearsonitcertification.com/articles/article.aspx?p=2240989&seqNum=4

which is just sort of randomly picked because I'm late and I've gotta run.

I would be very interested to hear what @jgreco has to say about mixed drive sizes. I mean, I'd be all for going with 4TB or 6TB drives for the next 12, but is that going to cause me other issues?

If you replace existing drives in a vdev (mirror pair in this case), it will not do anything for you until all (both in your case) drives in that vdev are upgraded.

When the size of a vdev is increased, there will be a strong tendency to favor that one single vdev for writes for awhile, as it appears to have (and does have) significantly more free space than other vdevs. ZFS does not "stripe" as people mistakenly call it. It opportunistically allocates new blocks, strongly preferring the least-full vdev.
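You can watch this allocation behavior directly from the shell (pool name "tank" is a placeholder):

```shell
# Per-vdev CAP/FREE columns show which vdevs ZFS is favoring;
# a newly added or newly grown vdev shows far more free space
# than the old ones, and attracts new allocations accordingly.
zpool list -v tank

# Live view, refreshed every 5 seconds: the new vdevs should be
# taking the bulk of the write operations until they catch up.
zpool iostat -v tank 5
```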

For an array of mirrors, you can bump the size of the array by picking a victim^Wvdev, adding a third (new) larger drive to the mirror, letting it resilver, detaching one of the old drives, inserting another large drive, letting that resilver, and then detaching the remaining old drive, leaving you with a mirror vdev of increased size. It will see heavier traffic for awhile. Make sure the autoexpand property is set before you begin.
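A sketch of that replacement dance as zpool commands (pool name "tank" and device names are placeholders; da0/da1 are the old 3TB pair, da16/da17 the new larger drives):

```shell
# Set this first, so the vdev grows once both sides are larger
zpool set autoexpand=on tank

# Attach the first new drive as a third side of the mirror
zpool attach tank /dev/da0 /dev/da16

# Wait for the resilver to complete before touching anything else
zpool status tank

# Detach one old drive, attach the second new one, resilver again
zpool detach tank /dev/da0
zpool attach tank /dev/da1 /dev/da17

# ...after that resilver completes, detach the last old drive;
# the vdev (and pool) now reflect the larger drive size
zpool detach tank /dev/da1
```

At every step the vdev keeps full redundancy, which is why this is preferred over a plain offline-and-replace.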

You can also add additional vdevs, which appears to be the original suggestion. You will gain additional IOPS as the number of vdevs increases. Again, ZFS will favor the new drives, and it will do so even more aggressively than in the replacement scenario. If you add a whole bunch at once, this is unlikely to be a problem.
 
Joined
Feb 2, 2016
Messages
574
I don't have any idea why he thinks this is a good idea. RAIDZ is basically evil for this sort of usage, as the IOPS will see a significant bottleneck. Plus be aware of the ZFS RAIDZ variable space allocation issue, so for a poorly designed combination of pool and ZVOL ...

Maybe I was unclear... I'm not suggesting mixing VDEV types in the same pool. VMs need IOPS. Keep the VMs on striped mirrors. What I'm suggesting is maybe he needs two different pools and that using mirrors for bulk storage is inefficient.

ZFS will favor the new drives, and it will do so even more aggressively than in the replacement scenario. If you add a whole bunch at once, this is unlikely to be a problem.

Exactly. At just 50% full and adding six VDEVs to an eight-VDEV group, write performance will be slightly lower (six VDEVs taking most of the writes versus eight) while the new VDEVs reach equilibrium. Once that happens, performance will be much better (14 VDEVs versus eight).

Cheers,
Matt
 

zimmy6996

Explorer
Joined
Mar 7, 2016
Messages
50
More curious is that you've been able to make a single ZVOL work. You might want to experiment carefully if you add more disk space. VMware limits the queue depth, etc.; maybe take a look-see at stuff like

http://www.pearsonitcertification.com/articles/article.aspx?p=2240989&seqNum=4

which is just sort of randomly picked because I'm late and I've gotta run.


Ahh!!! There is the grinch!!! :) How's it going friend?

So yeah, I completely understand the mirrors/RAIDZ arguments. You taught me well a year or so ago when I originally set things up, so I'm completely set up with mirrors, since this system is only a back end for VMware.

With that in mind, right now there are 8 mirrors of 3TB each, for raw storage of 24TB. I then provisioned a 12TB ZVOL there (50% of the array capacity), and each of my two FreeNAS boxes presents a single 12TB block storage device to ESXi for hosting guests.

My intent here is to add another 18TB of storage (6 more 3TB mirrors) taking the raw storage up to 42TB, and then expanding that block storage for iSCSI from 12TB to 21TB on each machine.

Does that sound like a safe plan?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm skeptical about expanding the size of the ZVOL, even 12TB sounds like too much, but if that isn't a problem for you currently, it may not be a problem if you expand. That's all I got for ya.
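For what it's worth, the ZVOL side of that expansion is a one-line property change (sketch; "tank/vmware-zvol" is a placeholder dataset path). The VMware side then needs the LUN rescanned and the VMFS datastore grown separately:

```shell
# Grow the ZVOL backing the iSCSI extent from 12T to 21T.
# Grow only -- shrinking a volsize under a live datastore destroys data.
zfs set volsize=21T tank/vmware-zvol

# Confirm the new size before rescanning from the ESXi side
zfs get volsize tank/vmware-zvol
```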
 