SOLVED: Rebalancing the pool - but how to get the /dev/disk/by-id/disk1?

dxun · Explorer · Joined Jan 24, 2016 · Messages: 52
On TrueNAS 12.0-U8, I am extending my pool of mirrored vdevs and am trying to rebalance the pool, as explained in the excellent article from JRS.

Here is the encrypted pool I've got right now:
[screenshot of the current pool layout]

My intent is to create a new pool of 3x mirrored vdevs and a mirrored metadata vdev (assume all mirrors are 2-way).

I've already replicated the contents of this pool to a new encrypted pool consisting of a mirror of new disks that have been scrubbed. That mirror has since been "split" (i.e. drive /dev/da7 was detached from it), and it's this drive I'm having trouble putting to use. The plan is to: create a new pool with two mirrored vdevs, one single-disk (stripe) vdev and a mirrored metadata vdev; replicate the data from the "broken" mirrored pool to this new pool; then attach the remaining disk from the "broken" mirror to the single-disk vdev, turning the new pool into 3 mirrored vdevs + a mirrored metadata vdev.
Here's what I've tried so far:
  1. the article mentions creating a new pool, but I am not sure how to get the /dev/disk/by-id/diskX identifiers needed to create one. The UI doesn't allow me to create a pool with 2 mirrors, a stripe and a mirrored metadata vdev, and I'm not sure I'd know how to create such a fusion pool from the command line either (see the CLI sketch after this list).
  2. I've tried to attach the drive /dev/da7 to the existing pool above (through the command line), but zpool attach storage /dev/da7 fails with a missing <new_device> specification error.
  3. I am aware of this ZFS in-place rebalancing script, which seems to work the way I would expect, but I am unsure how well tested it is. Does anyone have any experience with it?
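For reference, here is roughly what that might look like on the CLI. This is a sketch only - the pool name and gptid values below are placeholders, not this system's devices, and on TrueNAS you would normally point zpool at GPT partitions / gptid labels rather than raw disks:
Code:
# Sketch only - pool name and gptid values are placeholders.
# A "fusion" pool: two mirrored data vdevs, one single-disk (stripe) vdev,
# and a mirrored special (metadata) vdev. Mixing a single disk with mirrors
# makes zpool complain about mismatched replication levels, hence -f.
zpool create -f newpool \
    mirror gptid/aaaa-1111 gptid/aaaa-2222 \
    mirror gptid/bbbb-1111 gptid/bbbb-2222 \
    gptid/cccc-1111 \
    special mirror gptid/dddd-1111 gptid/dddd-2222

# zpool attach needs the existing member device as well as the new one:
#   zpool attach <pool> <existing_device> <new_device>
zpool attach newpool gptid/cccc-1111 gptid/cccc-2222

Note that zpool attach always expects both the device already in the vdev and the new device being attached to it, which would explain the missing <new_device> error in point 2.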
Of note: glabel status does not list any /dev/da7 partitions at all, while both of these commands
Code:
camcontrol devlist
camcontrol identify /dev/da7

do return info for da7, but I have no clue how to combine that output with what I need to create the new fusion pool that my data will land on. I suspect there is a gap in my knowledge that I can't quite pinpoint.
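For context: /dev/disk/by-id/... paths are a Linux convention (the JRS article targets Linux). On TrueNAS CORE / FreeBSD, the stable identifiers are the GPT labels under /dev/gptid/, and a rough way to look them up for a given disk is sketched below (da7 here is just the disk in question; if the disk has no partition table yet, gpart will report no geom, which would also explain the empty glabel output):
Code:
# GPT label -> device mappings for all partitioned disks
glabel status

# Partition layout and rawuuid values for a single disk
gpart show da7
gpart list da7 | grep rawuuid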
 

sretalla (Powered by Neutrality) · Moderator · Joined Jan 1, 2016 · Messages: 9,703
I am aware of this ZFS in-place rebalancing script which seems to be working the way I would expect but I am unsure how tested it is. Does anyone have any experience with it?
That script would be the option I would use if I needed to rebalance. The logic it uses is sound, and it's a little less complicated than messing around recreating datasets and washing the data between them with zfs send | recv. I have not personally tested the script, but I did look at it and nothing seems fishy about it to me.
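The general idea behind such an in-place rebalancing script is simply to rewrite every file so that its blocks are re-allocated across all of the pool's current vdevs. A concept-only sketch (not the actual script - the real one also handles file attributes, checksum verification options and so on; the path is a placeholder):
Code:
# Concept sketch only, not the actual script. Rewriting a file forces ZFS to
# allocate new blocks, which are spread across all vdevs now in the pool.
find /mnt/storage/somedataset -type f | while IFS= read -r f; do
    cp -p "$f" "$f.rebalance"           # full copy -> fresh block allocation
    if cmp -s "$f" "$f.rebalance"; then
        mv "$f.rebalance" "$f"          # swap the rebalanced copy into place
    else
        rm -f "$f.rebalance"            # keep the original if the copy didn't verify
    fi
done

This is also why the snapshots have to go: as long as a snapshot still references the old blocks, rewriting the files won't actually free (or move) the space they occupy.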

The UI doesn't allow me to create a pool with 2 mirrors, a stripe and a mirrored metadata vdev. I am also not sure if I knew how to create such a fusion pool from command line.
It does (you might need to use the force option). But don't do it that way; use the script.

Make sure you manage your snapshots during and after the rebalancing (you'll need to destroy them all as you go, as suggested in the script notes).

Use zpool list -v to see your progress (again, as suggested by the script notes).
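In concrete terms that means something like the following (dataset and snapshot names are placeholders - adjust them to your own layout):
Code:
# See which snapshots are still pinning the old block layout
zfs list -t snapshot -r storage

# Destroy them as you go (placeholder name)
zfs destroy storage/mydataset@auto-20220301

# Watch the per-vdev ALLOC / CAP columns even out as the rebalance proceeds
zpool list -v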
 

dxun · Explorer · Joined Jan 24, 2016 · Messages: 52
Thank you - I'll give that a try then. What is the best way to compare the before and after states and confirm the rebalancing has been successful? Never mind - the git repo README has all the info. Specifically, zpool list -v will report on this.

EDIT - the script worked great. It took almost a full day to rebalance the 9 TB pool, but it was certainly easier (and less risky for someone not really familiar with the ZFS CLI) than doing the rebalancing via pool "shenanigans". I left the MD5 checksumming on.

Here is how it looked as the pool was being expanded and rebalanced - I think this is looking pretty balanced now. Interestingly, the free-space fragmentation percentage on the metadata vdev _increased_ with the rebalancing of the pool. The file count stayed more or less the same, so I am not sure how to explain that.

Code:
-------------------- BEFORE -------------------
root@storage[~]# zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage                                         18.3T  8.49T  9.85T        -         -     0%    46%  1.00x    ONLINE  /mnt
  mirror                                        9.08T  4.24T  4.83T        -         -     0%  46.7%      -    ONLINE
    gptid/37a127c9-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37cc6e90-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        9.08T  4.24T  4.84T        -         -     0%  46.7%      -    ONLINE
    gptid/36e61551-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37996207-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
special                                             -      -      -        -         -      -      -      -  -
  mirror                                         186G  3.04G   183G        -         -    11%  1.63%      -    ONLINE
    gptid/f0c033a7-8973-11ec-a536-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
    gptid/3501f46e-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE

-------------------- AFTER ADDING -------------------

root@storage[~]# zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage                                         31.1T  8.49T  22.6T        -         -     0%    27%  1.00x    ONLINE  /mnt
  mirror                                        9.08T  4.24T  4.83T        -         -     0%  46.7%      -    ONLINE
    gptid/37a127c9-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37cc6e90-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        9.08T  4.24T  4.84T        -         -     0%  46.7%      -    ONLINE
    gptid/36e61551-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37996207-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        12.7T      0  12.7T        -         -     0%  0.00%      -    ONLINE
    gptid/a59e8b5f-aad5-11ec-b4a8-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
    gptid/a6e1ec0a-aad5-11ec-b4a8-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
special                                             -      -      -        -         -      -      -      -  -
  mirror                                         186G  3.04G   183G        -         -    11%  1.63%      -    ONLINE
    gptid/f0c033a7-8973-11ec-a536-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
    gptid/3501f46e-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE

-------------------- AFTER REBALANCING -------------------
root@storage[~]# zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage                                         31.1T  7.70T  23.4T        -         -     0%    24%  1.00x    ONLINE  /mnt
  mirror                                        9.08T  2.38T  6.70T        -         -     0%  26.2%      -    ONLINE
    gptid/37a127c9-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37cc6e90-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        9.08T  2.45T  6.63T        -         -     0%  26.9%      -    ONLINE
    gptid/36e61551-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
    gptid/37996207-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE
  mirror                                        12.7T  2.88T  9.84T        -         -     0%  22.6%      -    ONLINE
    gptid/a59e8b5f-aad5-11ec-b4a8-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
    gptid/a6e1ec0a-aad5-11ec-b4a8-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
special                                             -      -      -        -         -      -      -      -  -
  mirror                                         186G  2.71G   183G        -         -    14%  1.45%      -    ONLINE
    gptid/f0c033a7-8973-11ec-a536-ac1f6b4cdef2      -      -      -        -         -      -      -      -    ONLINE
    gptid/3501f46e-4f5a-11ec-a574-000f530c4484      -      -      -        -         -      -      -      -    ONLINE


This is now a pool of 4x 10 TB and 2x 14 TB Exos drives. I am running the post-rebalancing scrub, and after adding the two 14 TB drives and changing the CPUs from 2x E5-2640 v3 (2x 8 cores) to 2x E5-2623 v4 (2x 4 cores), the scrub duration was nearly _cut in half_ (from ~5.5 hours to 3.1 hours)!
Needless to say, I am very surprised by this significant improvement - to the point of being shocked. Despite losing 50% (!) of the cores, the extra mirrored vdev and faster memory (1866 -> 2133 MHz) improved scrubbing that much?! Incredible.
 