best way to upgrade / replace pool to have fewer vdevs

melbournemac

Dabbler
Joined
Jan 21, 2017
Messages
20
Hi,

I expect to reach 80% utilisation in one of my storage pools late this year, so I thought I'd better start planning how to address it. The pool consists of 3 vdevs, each a mirrored pair of 6TB WD Red (CMR) drives. The pool is mainly used for SMB shares, but it also contains the system dataset and storage for one VM (running under iohyve).

Given the case is getting full and 12TB & 14TB drives are now available, I was thinking of reducing the number of vdevs in the pool instead of just increasing the capacity of each vdev or adding another vdev.

Having only ever increased disk capacity or added vdevs to a pool, I thought it pertinent to ask some questions:
  • Is it still true that you cannot remove a vdev from a pool?
  • Is the recommended approach still the following (taken from here)? A rough command sketch follows the list.
    • Add the additional disks (burn in / long SMART test)
    • Create a new pool comprising the new disks
    • Create a snapshot of the existing pool
    • Perform a zfs send / receive of the snapshot to the new pool
    • Shut down NFS / SMB etc. to stop writes
    • Take another snapshot of the existing pool and incrementally send it to the new pool
    • Mark the existing pool as read only
    • Export both pools
    • Import the new pool, renaming it to the existing pool's name
    • Start the NFS / SMB services
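In command terms, I assume that works out to roughly the following (pool and snapshot names are just placeholders I've made up):

Code:
     # hypothetical names: old pool "tank", new pool "newtank"
     zfs snapshot -r tank@migrate1
     zfs send -R tank@migrate1 | zfs receive -F newtank

     # stop SMB / NFS, then catch up with an incremental send
     zfs snapshot -r tank@migrate2
     zfs send -R -i tank@migrate1 tank@migrate2 | zfs receive -F newtank

     zfs set readonly=on tank
     zpool export tank
     zpool export newtank
     zpool import newtank tank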
Appreciate any input / recommendations on the best way forward.

thanks,

Steve

Current Specs (I know it's very old hardware - expect to update motherboard, CPU, RAM sooner rather than later)
Version: TrueNAS-12.0-U1.1 (as of 27-Jan-2021)
Motherboard: Intel S5500HCV
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Memory: 24GB DDR3 ECC RAM (6 x 4GB)
HBA: LSI SAS2008-8I SATA 9211-8i
Boot: Samsung SSD 750 EVO 120GB
Pool1: 3 vdevs, each a mirrored pair of 6TB WD Red (CMR)
Pool2: 1 vdev, single disk (not important / can be destroyed without impact)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is it still true that you cannot remove a vdev from a pool?
No.

If the vdev you want to remove is a mirror and your pool is up to date, you should be able to use zpool remove to get the data onto the other vdevs and take it out of the pool.
zpool remove testpool mirror-0

Code:
     zpool remove [-np] pool device...

             Removes the specified device from the pool.  This command currently
             only supports removing hot spares, cache, log devices and mirrored
             top-level vdevs (mirror of leaf devices); but not raidz.

             Removing a top-level vdev reduces the total amount of space in the
             storage pool.  The specified device will be evacuated by copying
             all allocated space from it to the other devices in the pool.  In
             this case, the zpool remove command initiates the removal and
             returns, while the evacuation continues in the background.  The
             removal progress can be monitored with zpool status.  This feature
             must be enabled to be used, see zpool-features(5)

             A mirrored top-level device (log or data) can be removed by
             specifying the top-level mirror for the same.  Non-log devices or
             data devices that are part of a mirrored configuration can be
             removed using the zpool detach command.

             -n      Do not actually perform the removal ("no-op").  Instead,
                     print the estimated amount of memory that will be used by
                     the mapping table after the removal completes.  This is
                     nonzero only for top-level vdevs.

             -p      Used in conjunction with the -n flag, displays numbers as
                     parsable (exact) values.

     zpool remove -s pool
             Stops and cancels an in-progress removal of a top-level vdev.
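
A quick sketch of how that looks in practice (testpool / mirror-0 as in the example above):

Code:
     # dry run: estimate the in-memory mapping table cost without removing anything
     zpool remove -n -p testpool mirror-0

     # do the removal, then watch the evacuation progress
     zpool remove testpool mirror-0
     zpool status testpool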
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
is the recommended approach still to (taken from here)?
I have used the method you linked to several times when re-configuring a pool to have a smaller number of vdevs or a different number of drives per vdev. It does work as advertised.
The time to complete the initial zfs send and receive can be quite long depending on link speed and the amount of data. I had a system at work that took 45 days because of the amount of data being transferred.

The directions you pointed to also assume that both pools are in the same system. You can send from one system and receive on another if you add some networking info.
See this post if that is something you might need:
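A minimal sketch of that networked variant, assuming SSH access from the old box to the new one (hostnames and pool names here are placeholders):

Code:
     # run on the old system; "newnas" and "newtank" are hypothetical names
     zfs snapshot -r tank@migrate1
     zfs send -R tank@migrate1 | ssh root@newnas zfs receive -F newtank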
 