Migrating a 6-disk pool of mirrors - best way to do so?

llavalle

Cadet
Joined
Jan 25, 2019
Messages
3
Hey guys,

So I have this kinda old TrueNAS box that I've been keeping up to date software-wise, but from a hardware perspective it's pretty much at end of life. I have 2 pools: a mirror of 2 SSDs, which is fine, and the main one, whose disks have over 11 years of power-on hours (and 2 of them have VERY noisy bearings). It's set up as a stripe of 3 mirrors, each with 2 disks, all WDC Red (CMR).

Code:
sudo zpool status
  pool: MainShare
 state: ONLINE
  scan: scrub repaired 0B in 06:49:12 with 0 errors on Sun Feb 11 06:49:15 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        MainShare                                 ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            fd49ea50-6836-11e4-882a-74d435ea17c5  ONLINE       0     0     0
            fdb3c8b2-6836-11e4-882a-74d435ea17c5  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            1f4b0ec7-052c-11e8-9e5e-74d435ea17c5  ONLINE       0     0     0
            fe7f33d6-6836-11e4-882a-74d435ea17c5  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            fedf9d6b-6836-11e4-882a-74d435ea17c5  ONLINE       0     0     0
            ffa53ac7-6836-11e4-882a-74d435ea17c5  ONLINE       0     0     0

errors: No known data errors


Looking at my options, I could definitely just swap them all one by one with newer disks, buying 6 drives... but looking at the prices of HDDs, I could simply remove the 6 disks and replace them with a single mirror of much higher-capacity drives, like the WD Red Plus 14TB. That's almost 50% cheaper and consumes less energy. It *might* be slower (would need to check benchmarks), but I'm using this as a destination for backups and running 1Gbit networking right now, which is definitely slower than a WD Red Plus.

Now here's the thing: my motherboard + SAS cards are maxed out. I can't add any new drives just to replicate everything, so I'm unsure how I could back everything up, destroy the pool, and recreate it with the new drives.

Also problematic: I have all of my apps + 2 small VMs sitting on this pool.

Thought of a few options, unsure what could work (the replication part itself is sketched after the list):
1- Remove one of the 3TB drives and run this pool degraded... plug one of the new disks in, replicate everything... rip out the leftovers of the original pool, put the 2nd new disk in and extend onto it (or remove 2 disks from the first pool instead...).

2- Use a USB-to-SATA enclosure to connect one of the new drives directly to the TrueNAS box, replicate, then kill the original pool, put the new drives in, and run with it?

3- Seems convoluted, but I feel like I could possibly do this without having to migrate / reconfigure anything:
- Set up TrueNAS SCALE in a VM on my desktop, creating the new pool on the 2 new disks (connected over SATA or with the USB adapter)
- Replicate everything to this new TrueNAS SCALE instance
- Once done, rip the 6-disk pool out and install the 2-disk pool in, as a replacement for the existing one... <-- Is that even possible? Keeping shares, snapshots, apps, etc.
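
(For context on the replication step in any of these options: my understanding is that it's just a recursive snapshot plus a send/receive into the new pool, roughly like below. "NewShare" is only a placeholder name and I'd double-check the flags before running anything.)

Code:
# take a recursive snapshot of everything on the old pool
sudo zfs snapshot -r MainShare@migrate
# send the whole hierarchy (datasets + snapshots) into the new pool
sudo zfs send -R MainShare@migrate | sudo zfs receive -F NewShare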

Anyway, open to suggestions... My other idea was that since the hardware is really old, I could just build a new box entirely with the new disks :P Finding a case for 2 disks is way easier than for 6 with hot-swap... and I won't need the SAS controller, etc. etc.
 

Krill

Cadet
Joined
Dec 26, 2023
Messages
3
Pragmatic option: why not export the two-SSD pool to free up two connections, then add a new two-HDD mirror and replicate from the old 6-disk pool? Once the replication and other tests are done, export the 6-drive pool and import the two-SSD pool again. This way there is no network usage, which removes that bottleneck, and then you can rebuild a new box at your leisure.
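
At the command line the shuffle would be roughly this (the SSD pool name is a placeholder, and on TrueNAS you would normally do the export/import from the Storage screen so the middleware stays in sync):

Code:
# free up the two SATA ports; the data stays on the SSDs
sudo zpool export ssd-pool
# ...connect the two new HDDs, build the new mirror, replicate, test...
# then retire the old pool and bring the SSD pool back
sudo zpool export MainShare
sudo zpool import ssd-pool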
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Skip all that and do this (CLI equivalents are sketched below):
  • Replace 1 disk in "mirror-0" with a larger disk, and wait for the resilver to complete
  • Replace the 2nd disk in "mirror-0" with the other larger disk, and wait for the resilver to complete
  • Perform the GUI vDev removal of "mirror-2" and wait until it completes. This frees up 2 SATA ports.
  • Perform the GUI vDev removal of "mirror-1" and wait until it completes. This frees up another 2 SATA ports.
All of this is done "live". Except for the pool I/O slowdown while re-mirroring in the first 2 steps and migrating data in the last 2 steps, everything works as normal. ZFS was intended for full uptime, so very little is done with the pool unavailable.
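
At the command line, those four steps boil down to something like this (the NEW-DISK paths are placeholders; the GUI picks the real device names and handles partitioning for you):

Code:
# steps 1 and 2: swap both members of mirror-0 for the big disks, one at a time
sudo zpool replace MainShare fd49ea50-6836-11e4-882a-74d435ea17c5 /dev/disk/by-partuuid/NEW-DISK-1
# ...wait for the resilver to finish, then...
sudo zpool replace MainShare fdb3c8b2-6836-11e4-882a-74d435ea17c5 /dev/disk/by-partuuid/NEW-DISK-2
# steps 3 and 4: evacuate and remove the other two vDevs
sudo zpool remove MainShare mirror-2
sudo zpool status MainShare   # shows the evacuation progress
sudo zpool remove MainShare mirror-1

One note: vDev removal leaves a small indirection table in the pool for the remapped blocks; that is expected and generally negligible for a pool this size.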

If there is a "mirror" vDev with noticeably older disks, make it the target of the first 2 steps above. That way we get those disks out of the picture as quickly as possible.

With such large new disks, you might want to go with a 3-way mirror instead of staying with a 2-way mirror. Now that you have freed up SATA ports, that is possible.
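
Adding the third disk later is a single attach against either existing member of the new mirror (placeholder paths again; the GUI has an equivalent on the pool's devices page):

Code:
# attach a third disk to the vDev that already holds NEW-DISK-1,
# turning the 2-way mirror into a 3-way mirror
sudo zpool attach MainShare /dev/disk/by-partuuid/NEW-DISK-1 /dev/disk/by-partuuid/NEW-DISK-3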
 

llavalle

Cadet
Joined
Jan 25, 2019
Messages
3
  • Replace 1 disk in "mirror-0" with a larger disk, and wait for the resilver to complete
  • Replace the 2nd disk in "mirror-0" with the other larger disk, and wait for the resilver to complete
  • Perform the GUI vDev removal of "mirror-2" and wait until it completes. This frees up 2 SATA ports.
  • Perform the GUI vDev removal of "mirror-1" and wait until it completes. This frees up another 2 SATA ports.
Interesting, I didn't know this was an option (3rd and 4th steps)! I was aware of steps 1 and 2... but was under the impression that once you added a vDev to a pool, you were stuck with it.

Definitely the best option so far. As for disk age, 5 of the 6 disks are > 10 years old (powered on 24/7). Only one is 7 years old (warranty return within the first 3 years).
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Interesting, I didn't know this was an option (3rd and 4th steps)! I was aware of steps 1 and 2... but was under the impression that once you added a vDev to a pool, you were stuck with it.
Nope! Pools made up exclusively of stripe/mirror vdevs can generally have top-level vdevs removed. (Mismatched ashift sizes can break this, but ashift=12 has been the default on TrueNAS for some time now.)

As soon as you put RAIDZ in the mix anywhere, though, vdev removal becomes impossible.
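
If you want to sanity-check before attempting the removal, something like this should show the ashift each top-level vdev was created with (on TrueNAS you may need to point zdb at the pool cache file, as in the second line):

Code:
sudo zdb -C MainShare | grep ashift
# if zdb can't find the pool config by name, try the cache file:
sudo zdb -U /data/zfs/zpool.cache -C MainShare | grep ashift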
 