Single-path to multipath vdev migration

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
I searched around and really only came up with one relevant thread, which was almost four years old and didn't really have an answer. I'm running FreeNAS 11.1-U6 and have two shelves: the main one contains the processing power, controllers, etc., and has a dual-expander backplane that provides redundant connections to the two controllers; the second shelf is a JBOD unit that currently has a single-expander backplane, but I've got a dual-expander backplane coming for it. Once the dual-expander backplane is in, I believe the vdev should come right back up on the original disk IDs, but over one specific path, not multipath, so as-is, if the active path went down, the vdev would go down. I thought I had read something on the subject a while back, but I've been unable to find it again. How would I go about migrating the vdev in the external JBOD enclosure from the current single-path setup to multipath (the applicable SAS disks, of course) once the new backplane is installed, without blowing it all away and recreating it?

Edit to add: this pool *IS* encrypted, as Chris surmised.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
(the applicable SAS disks, of course)
I'm not sure what you mean here. Do you have multipath-capable SAS drives?

If your hardware is compatible, there should be no 'conversion' needed.
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A zvol is not what you think it is. The word you're looking for is "pool". A zvol is a virtual block device backed by ZFS, effectively a special type of dataset that exposes a block device instead of a POSIX filesystem.
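For example (pool and dataset names made up):
```
# Creates a 10 GiB zvol, which shows up as the block device /dev/zvol/tank/example
zfs create -V 10G tank/example
```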
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
Do you have multipath-capable SAS drives?

Yes - all drives in the main pool in this enclosure are HGST Enterprise SAS drives that support multipathing. There is also a two-disk mirror consisting of two SATA drives, which obviously do not support multipathing, so there will be no change for those.

A zvol is not what you think it is. The word you're looking for is "pool". A zvol is a virtual block device backed by ZFS, effectively a special type of dataset that exposes a block device instead of a POSIX filesystem.

You are correct, misused terminology... But I think you know what I mean :)
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
Sorry, my terminology was wrong, but the intent was correct - FreeNAS, with its vdevs, zpools, etc., has more layers than a 'traditional' RAID controller. That being said, what will need to be done to migrate the SAS-based vdev from the current single path to multipath? Is it something FreeNAS can do automatically, or will I need to remove each disk, re-add it as multipath, and resilver, one disk at a time, until they're all migrated?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sorry, my terminology was wrong, but the intent was correct - FreeNAS, with its vdevs, zpools, etc., has more layers than a 'traditional' RAID controller. That being said, what will need to be done to migrate the SAS-based vdev from the current single path to multipath? Is it something FreeNAS can do automatically, or will I need to remove each disk, re-add it as multipath, and resilver, one disk at a time, until they're all migrated?
You have some other parallel posts going, and it appears from those that you may have an encrypted pool. That could make a significant difference in how this works out for you.
If the drives were not encrypted, you should simply be able to move them from one connection to the other. Encryption makes things more particular.
I hope someone with more experience in that area will weigh in.
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
Just an update to bring this back up: I got the new dual-expander backplane and cabled it up, and as expected, the HBA BIOS sees both paths to the SAS disks in the enclosure where I swapped the backplane. On bootup, everything came up fine and, as pretty much expected, the SAS disks in that enclosure came up with their original, non-multipath IDs. So everything's running happily, just not multipath for those SAS disks.
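For anyone wanting to check the same thing from the OS side rather than the HBA BIOS, the stock FreeBSD tools should show it (just a sketch; nothing here is specific to my setup):
```
# A dual-ported SAS disk behind a dual-expander backplane shows up twice,
# once per path (e.g. da5 and da25 pointing at the same drive)
camcontrol devlist

# Lists any multipath devices that have been created so far
gmultipath status
```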

Now, I'd swear I read something a while back about converting disks from single path to multipath that, if I recall correctly, involved removing a disk, turning it into a multipath device, and then putting that 'new' multipath disk back into the pool and resilvering (one at a time, of course), but I can't seem to find it anywhere. In theory, if that procedure works, the fact that the pools are encrypted *shouldn't* make a difference, because it wouldn't really be any different than replacing a failed disk with a new one. Any experts have any thoughts?
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
Just another update, in case anyone is interested. Based on the info I believe I read before, I did some testing. I have four free bays in the enclosure that just got the new backplane, and it so happens I have four 2TB SATA drives and one 2TB SAS drive. I set the four SATA drives up as an encrypted RAID-Z1, same as the data volume in question, just smaller, presented an iSCSI target to my VMware hosts, set up a datastore, added a disk to my FS VM, and put some data on it. I then set one disk offline, removed it, and inserted the 2TB SAS drive in its place. When I selected 'Replace' on the removed drive, I was given the option to pick the multipath device for the 2TB SAS disk I had inserted. I got a warning that this would invalidate my key (I assume a new one is generated, which I would have to download to be able to recover the pool if that were ever needed, as the pool never went offline), continued on, and it resilvered in a relatively short time (not a whole lot of data on it) and came back healthy, now with the one multipathed disk.
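For reference, the rough CLI equivalent of what the GUI did (a sketch only; the pool name, gptid, and multipath provider names below are placeholders, and on an encrypted pool the GUI also handles the geli setup and re-key for you):
```
# Take the to-be-replaced member offline
zpool offline testpool gptid/OLD-DISK-GPTID

# After the dual-ported SAS disk is inserted, FreeNAS notices both paths
# and creates a multipath device on its own; verify with:
gmultipath status

# The GUI 'Replace' then amounts to a zpool replace against the new
# provider (a geli .eli device, since the pool is encrypted)
zpool replace testpool gptid/OLD-DISK-GPTID multipath/disk1p2.eli
zpool status testpool    # watch the resilver progress
```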

I'm going to test it again, this time shutting down the array after replacing the disk and also comparing the two downloaded keys, but I think this is the proper solution, as it's functionally not much different from replacing a failing/failed drive - if you couldn't replace a failed drive because of the encryption, that would be pretty useless.
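If anyone wants to do the same key comparison, a checksum is the quick way (the file names are just examples):
```
# Matching digests mean the downloaded recovery keys are identical
sha256 geli_key_before.key geli_key_after.key
```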
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
Last update on this topic. After my testing with the 2TB drives, I made sure to get full backups of all the data on the volume in question, and once that was complete, I took the plunge and took the first drive offline. I moved it to another FreeNAS box, created a stripe on it, then destroyed the stripe, marking the disk as new (a quick erase probably would have done the trick too). I re-installed it in the original FreeNAS box in its original position, where it now appeared in the multipath list. I then selected the offlined disk in the pool, initiated the replace, selected the multipath device for the drive I had just inserted, and kicked off the resilver. The slowest resilver ran at about 10% per hour, so roughly ten hours for a full disk. Once the first one was done, I moved on to the next, and so on. As of now, all disks have been converted to multipath, and everything is happy. Since each disk replacement apparently invalidated the previous geli key, I re-saved the new key after the last disk was swapped and its resilver completed.
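For anyone repeating this, the create-and-destroy-a-stripe step is just a way of wiping the old metadata; doing it from the shell looks roughly like this (destructive, and da10 is only an example device name - triple-check you have the right disk):
```
# Wipe the GPT and any leftover ZFS labels so the disk is seen as new
gpart destroy -F da10
dd if=/dev/zero of=/dev/da10 bs=1m count=10
```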
 

TidalWave

Explorer
Joined
Mar 6, 2019
Messages
51
I have only seen one SAS cable fail in 3+ years as a storage manager, but dozens and dozens of hard drives. I'm not really sure how much redundancy you're actually gaining from this migration. To me it seems like more work than it's worth, but if you enjoy it, go for it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have detected a few bad cables, but they were probably damaged when the server was built rather than worn out from use. I had a SAS controller fail once, probably from overheating. These kinds of failures are extremely rare in my experience; drives, depending on the model, fail much more frequently. In the preceding twelve months, I have only had three drives that needed replacement. I lost track of the number before that, but server-grade drives tend to last well. I have a server at work, probably to be decommissioned later this year, that has about seven years on it and has only had three drives replaced in that time, and nothing else. Still, that was relatively new, good-quality hardware. Your mileage may vary.
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
I have only seen one SAS cable fail in 3+ years as a storage manager, but dozens and dozens of hard drives. I'm not really sure how much redundancy you're actually gaining from this migration. To me it seems like more work than it's worth, but if you enjoy it, go for it.

I completed this long ago, and it really wasn't all that painful. I went this route more for controller and expander redundancy than cable redundancy: if a controller or expander fails, the second path should keep everything up. I've also heard of situations where a failing disk flaked out in such a way that it took several other disks down with it, due to whatever it was doing on the SAS link.

Chris, you are absolutely correct. Drives do fail far more often than pretty much any other component (in my experience, Seagate has the highest failure rate), and SAS disks do seem to be more reliable than SATA, for whatever reason.
 