Moving some disks to additional server - keep online? Offline vs detach?

SAK

Dabbler
Joined
Dec 9, 2022
Messages
20
I searched but could not find the answer to this. Perhaps I didn't look well enough, but hopefully I am not repeating a post.

Let's say I have multiple TrueNAS servers. I have pools with mirrored vdevs. Say for instance a 6-disk 3-way mirror or an 8-disk 4-way. I want to take 2 disks out of one TrueNAS server and bring the same pool back up online in another. I realize I could shut the server down. However...is there a safe/reliable way to do this without taking the server down?

Just curious. If all the datasets are locked, could I "Offline" two of the disks? I have never used Offline before. Would it be required to export the pool and remove it from the first system, then take out a couple disks and reimport the pool back into both servers?

Thank you ahead of time. I was unclear about Offline vs Detach and whether the former would work, or if the pool needs to be exported.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Why wouldn't you use the ZFS command below?
zpool split POOL NEWPOOL DEV1 DEV2 DEV3 ...
See manual page for details.

I don't know if there is a GUI equivalent...
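For example, (pool and device names below are made up, adjust to your system):
zpool split tank tank2 sdb sdd
zpool import tank2
The new pool is left exported by default, so that import could just as well happen on the other server after moving the disks.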
 

SAK

Dabbler
Joined
Dec 9, 2022
Messages
20
Why wouldn't you use the ZFS command below?
zpool split POOL NEWPOOL DEV1 DEV2 DEV3 ...
See manual page for details.

I don't know if there is a GUI equivalent...
Very cool, indeed! Wow, thank you for the information. Do you know for sure whether this works with striped mirror storage too? I did some googling and found examples with simple mirrors, but haven't seen any striped mirror examples, so I wanted to ask before trying. Thank you for your help and expertise! Hopefully more people will find this information useful!

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The following description from the zpool split manual page implies that multiple, (Mirrored), vDevs are supported:
DESCRIPTION
Splits devices off pool creating newpool. All vdevs in pool must be mirrors and the pool must not be in the
process of resilvering. At the time of the split, newpool will be a replica of pool. By default, the last
device in each mirror is split from pool to create newpool.

The optional device specification causes the specified device(s) to be included in the new pool and, should
any devices remain unspecified, the last device in each mirror is used as would be by default.
Now I don't know if you can break a 4-way Mirror vDev into two 2-way Mirror vDevs... But, based on the manual page, splitting 1 disk off a 2-way or higher Mirror is supported.
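So for a striped Mirror pool, say a hypothetical tank made of mirror-0, (sda + sdb), and mirror-1, (sdc + sdd), the default split takes the last disk of each Mirror:
zpool split tank tank2
That leaves tank2 as a stripe of sdb and sdd, a point-in-time replica of tank with no redundancy. You could instead name one device per Mirror yourself:
zpool split tank tank2 sda sdc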

One comment. ZFS almost certainly won't allow you to damage your pool. A scrub and backup beforehand would be prudent, but strictly speaking not necessary.
 

SAK

Dabbler
Joined
Dec 9, 2022
Messages
20
I didn't want to risk it, so I just locked the encrypted pools and yanked the disks. I suppose the same thing could be accomplished by shutting off sharing so that nothing is writing to them at the time. So maybe someone else will want to try the striped mirror split sometime.
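If I try this again, I imagine the clean way, (pool name here is just an example), would be to export first so every disk is closed cleanly before pulling:
zpool export tank
On TrueNAS I would expect to do that through the GUI export/disconnect rather than the shell, but the effect should be the same.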
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Uh, what you did seems to have been far more of a risk. Especially if you did not shut down the server, (which would export the pool). The yanked disks will likely need recovery run on them, meaning they won't import as a clean, new pool. And you may have lost the most recent data on those yanked disks, (though unlikely).

The zpool split software was written more than 10 years ago. OpenZFS uses some pretty significant automated testing. Any time someone makes a change to code, (not in-line documentation or manual pages), the extensive automated tests are run, which I will assume includes the pool split function.

Now do I believe the OpenZFS automated testing is perfect?
No, but it is FAR more than many projects have.
Any new feature, like dRAID, is REQUIRED to have a complete set of automated tests, with all variations, as part of its release.

Should someone investigate to make sure splitting striped mirrors is part of the automated tests?
Probably, but I don't have time...
 

SAK

Dabbler
Joined
Dec 9, 2022
Messages
20
I misspoke above. I actually did do a system shutdown and then pulled the disks. Then I detached the missing disks once the server was back online.
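For the record, the detach step was just, (hypothetical names, use the device name or GUID that zpool status shows for the missing disk):
zpool detach tank sdb
zpool detach tank sdd
one per pulled disk, which stops the pool reporting them as missing.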

I am wondering, though: could I have stopped disk usage by shutting down shares and locking the datasets, and then offlined the disks? Would this export them without the need to completely shut down the system, so I could then replace them? I haven't used the Offline feature, so that had me wondering. In case anyone else finds this thread, this information will likely be useful.

Thank you for taking the time to share all of your knowledge with everyone, and for staying in Middle-earth.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Glad you pulled the disks with the power off.

No, off-lining disks on a live pool in sequence will NOT produce a clean detached mirror. This is because ZFS does things in the background, so the first disk off-lined might stop at ZFS transaction 1000, but the next disk off-lined might stop at ZFS transaction 1001. They are not capable of being put back together easily.
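You can actually see this on the disk labels, (hypothetical devices here, point zdb at whichever partition holds the vdev label on your system), since each label records the last transaction group it saw:
zdb -l /dev/sdb1 | grep txg
zdb -l /dev/sdd1 | grep txg
Two disks off-lined at different moments will show different txg numbers, so they no longer describe one consistent pool state.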

Basically, ZFS attempts to have data integrity at all times. However, this limits some things, like what you suggested.

The old Linux MD-RAID and Solaris DiskSuite were primitive enough that what you said would probably work. Not likely with ZFS.
 