Problems with VDEV autoexpand on Dragonfish or user error?

eJonny

Cadet
Joined
Mar 28, 2024
Messages
4
I have a pool with a VDEV that I am attempting to resize by replacing one drive at a time. It is a 4x12TB RAIDZ1 VDEV in which I have replaced all of the 12TB drives with 16TB drives. The pool has three RAIDZ1 VDEVs: one 16TBx4, one 12TBx4, and this one, a 12TBx4 that I'm trying to upgrade to 16TBx4.

I offlined each drive prior to replacement, removed the offlined drive (I didn't have a spare slot), slotted in the 16TB drive, and triggered the replace process. TrueNAS put the VDEV into degraded status each time, but it resilvered and the VDEV was healthy again. I repeated this for the remaining three drives.

At the end of the process I expected the pool size to increase, but after the fourth drive finished resilvering the pool size did not change.

I am running Dragonfish-24.04-RC.1.

I've done a fair amount of research on others with the same issue, so I've tried the following (roughly what I ran is sketched below):
1. I've rebooted TrueNAS SCALE
2. I've confirmed autoexpand is turned on with the zpool get autoexpand command
3. I've turned autoexpand off and back on with zpool set autoexpand=off and =on
4. I've run the zpool online -e command for each of the drives in the VDEV
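
For reference, this is roughly what steps 2-4 looked like on my system (pool name and partition UUIDs as they appear in the zpool list output below):
Code:
# step 2: confirm autoexpand is enabled on the pool
zpool get autoexpand bigpool

# step 3: toggle it off and back on
zpool set autoexpand=off bigpool
zpool set autoexpand=on bigpool

# step 4: ask ZFS to claim any new space on each member of the upgraded VDEV
zpool online -e bigpool 2c876bc1-e25c-4a1e-bbb4-05e137db5f42
zpool online -e bigpool 30fab442-6717-4d7e-b439-ea4a31c564aa
zpool online -e bigpool 959f59b3-f4aa-401e-b9b9-1c1ba38cd545
zpool online -e bigpool 65547adc-d97b-414e-af3a-2830cd3b6d22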

None of these steps has resolved my issue. The one clue I don't understand is why the GUI is showing two VDEVs with 16TB drives (14.55 TiB), but when I run zpool list -v bigpool I see the following:
Code:
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bigpool                                    145T   113T  32.9T        -         -    15%    77%  1.00x    ONLINE  /mnt
  raidz1-0                                58.2T  44.2T  14.0T        -         -    15%  76.0%      -    ONLINE
    bd75b86c-cac3-4dcb-b5fe-5dca73e3e7d9  14.6T      -      -        -         -      -      -      -    ONLINE
    6c63cbbb-824b-422e-aa4c-c255ec5828fc  14.6T      -      -        -         -      -      -      -    ONLINE
    6185f4f0-4ce7-4316-932c-c0468926d60f  14.6T      -      -        -         -      -      -      -    ONLINE
    e69dfd88-4cac-4fd1-b209-086fe5c41e4c  14.6T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                43.6T  34.1T  9.51T        -         -    16%  78.2%      -    ONLINE
    2c876bc1-e25c-4a1e-bbb4-05e137db5f42  10.9T      -      -        -         -      -      -      -    ONLINE
    30fab442-6717-4d7e-b439-ea4a31c564aa  10.9T      -      -        -         -      -      -      -    ONLINE
    959f59b3-f4aa-401e-b9b9-1c1ba38cd545  10.9T      -      -        -         -      -      -      -    ONLINE
    65547adc-d97b-414e-af3a-2830cd3b6d22  10.9T      -      -        -         -      -      -      -    ONLINE
  raidz1-2                                43.6T  34.2T  9.43T        -         -    15%  78.4%      -    ONLINE
    e973d868-6056-45d7-b2d9-d7af129dc4f0  10.9T      -      -        -         -      -      -      -    ONLINE
    6bcac0ea-4128-4a11-946c-ef9028b4afa0  10.9T      -      -        -         -      -      -      -    ONLINE
    49650b25-3f74-49b7-a6a3-c5a02179233f  10.9T      -      -        -         -      -      -      -    ONLINE
    7202cd7d-8f7b-4d1b-a8f8-bd4729b4c8b6  10.9T      -      -        -         -      -      -      -    ONLINE


My research hasn't uncovered anything else to try. Any suggestions? Might this be a Dragonfish bug?
 

Attachments

  • TrueNAS autoexpand problem.jpg (103.7 KB)

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

eJonny

Cadet
Joined
Mar 28, 2024
Messages
4
I think you are right. I ran lsblk and sorted the drives into VDEV order: the first four are my original 16TBx4, which is working fine; the next four are the new 16TBx4 that won't expand; and the last four are the 12TBx4, which is also fine.

Code:
lsblk -o name,partuuid,fstype,size
NAME        PARTUUID                             FSTYPE             SIZE
sdf                                                                14.6T
├─sdf1      0a939918-cd97-4c82-aa42-efc7cd1c0ced linux_raid_member    2G
│ └─md122                                                             2G
│   └─md122                                      swap                 2G
└─sdf2      6185f4f0-4ce7-4316-932c-c0468926d60f zfs_member        14.6T
sdg                                                                14.6T
├─sdg1      f5f59738-5281-4f02-9f58-f904b0a8f8dc linux_raid_member    2G
│ └─md124                                                             2G
│   └─md124                                      swap                 2G
└─sdg2      e69dfd88-4cac-4fd1-b209-086fe5c41e4c zfs_member        14.6T
sdh                                                                14.6T
├─sdh1      ae763085-19f4-4ea1-9a4e-05be56582154 linux_raid_member    2G
│ └─md123                                                             2G
│   └─md123                                      swap                 2G
└─sdh2      6c63cbbb-824b-422e-aa4c-c255ec5828fc zfs_member        14.6T
sdo                                                                14.6T
├─sdo1      1d32ed54-230c-419e-b5f0-fd02f7aa67b3 linux_raid_member    2G
│ └─md124                                                             2G
│   └─md124                                      swap                 2G
└─sdo2      bd75b86c-cac3-4dcb-b5fe-5dca73e3e7d9 zfs_member        14.6T


sdb                                                                14.6T
├─sdb1      42f77ae2-f0ec-4181-828f-8f697c62630b                      2G
└─sdb2      30fab442-6717-4d7e-b439-ea4a31c564aa zfs_member        10.9T
sdc                                                                14.6T
├─sdc1      089a386b-ea4b-4928-a98a-b0d075faf196                      2G
└─sdc2      959f59b3-f4aa-401e-b9b9-1c1ba38cd545 zfs_member        10.9T
sdd                                                                14.6T
├─sdd1      e4bfec43-a190-4554-b74e-4a5aefd4a564                      2G
└─sdd2      2c876bc1-e25c-4a1e-bbb4-05e137db5f42 zfs_member        10.9T
sde                                                                14.6T
├─sde1      ee0d4dae-dc8c-42ad-b71f-474aaa5b2268                      2G
└─sde2      65547adc-d97b-414e-af3a-2830cd3b6d22 zfs_member        10.9T


sdj                                                                10.9T
├─sdj1      795f9b30-346e-428c-b261-45fbb0e6edf5 linux_raid_member    2G
│ └─md125                                                             2G
│   └─md125                                      swap                 2G
└─sdj2      6bcac0ea-4128-4a11-946c-ef9028b4afa0 zfs_member        10.9T
sdl                                                                10.9T
├─sdl1      5bd69252-33ec-472d-b0ac-76eb66fb4bba linux_raid_member    2G
│ └─md125                                                             2G
│   └─md125                                      swap                 2G
└─sdl2      7202cd7d-8f7b-4d1b-a8f8-bd4729b4c8b6 zfs_member        10.9T
sdn                                                                10.9T
├─sdn1      33aa53d8-230e-4844-ae4f-bcea0f09e2ab linux_raid_member    2G
│ └─md123                                                             2G
│   └─md123                                      swap                 2G
└─sdn2      e973d868-6056-45d7-b2d9-d7af129dc4f0 zfs_member        10.9T
sdq                                                                10.9T
├─sdq1      556ae5a8-a2fe-4307-8755-336214cefff9 linux_raid_member    2G
│ └─md122                                                             2G
│   └─md122                                      swap                 2G
└─sdq2      49650b25-3f74-49b7-a6a3-c5a02179233f zfs_member        10.9T


Searching on "lsblk" and "expand" led me to this thread: https://www.truenas.com/community/t...ty-not-expanding-23-10-1-cobia-solved.116337/.

Based on my lsblk output, do you think I've run into the same partitioning SNAFU?

Do you recommend the remediation steps from that thread?
1. Reinstall TrueNAS-SCALE-22.12.4.2.
2. Offline the disk.
3. Re-add the disk and resilver.

Do I really need to downgrade my install?
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Do I really need to downgrade my install?
It seems like you don't; refer to @danb35's write-up here:

 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
do you think I've run into the same partitioning SNAFU?
It looks like it; /dev/sd{bcde} are partitioned as though they're 12 TB disks rather than 16 TB.

Run parted /dev/sdb resizepart 2 100% and repeat for the remaining affected drives.
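
Something like this, using the device letters and pool name from your own output (double-check the sdX letters against lsblk first, since they can change between boots):
Code:
# grow partition 2 to fill each of the short-partitioned disks
parted /dev/sdb resizepart 2 100%
parted /dev/sdc resizepart 2 100%
parted /dev/sdd resizepart 2 100%
parted /dev/sde resizepart 2 100%

# with autoexpand=on the pool should pick up the space on its own;
# if it doesn't, repeat the zpool online -e commands for the members of that VDEV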
 

eJonny

Cadet
Joined
Mar 28, 2024
Messages
4
It looks like it; /dev/sd{bcde} are partitioned as though they're 12 TB disks rather than 16 TB.

Run parted /dev/sdb resizepart 2 100% and repeat for the remaining affected drives.

Looks like you've literally written the guide on avoiding this problem. I sincerely appreciate the help resolving it. In your guide to manual drive replacement you mention creating the swap partition. It looks like there is a 2G partition, but is it just not named "swap"?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It looks like there is a 2G partition, but is it just not named "swap"?
There is the 2G partition, but not only is it not named "swap", it isn't of the correct filesystem type. I don't see that there's a need to bother with it, though.
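
If you want to see the difference for yourself, comparing one of the original disks with one of the replacements should show it (device names from your lsblk output above):
Code:
# sdo1 shows linux_raid_member with an md device and swap on top;
# sdb1 has no filesystem signature at all
lsblk -o name,fstype,size /dev/sdo /dev/sdb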
 

eJonny

Cadet
Joined
Mar 28, 2024
Messages
4
There is the 2G partition, but not only is it not named "swap", it isn't of the correct filesystem type. I don't see that there's a need to bother with it, though.

Thanks again for all your help. I ran the
Code:
parted /dev/sdb resizepart 2 100%
for each of the drives, and voilà, the storage pool now has double-digit TB more space. But I suspect I'll always feel like this VDEV was abused by TrueNAS, and I hope it doesn't develop some kind of trauma-related problem in the future.
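
In case it helps the next person who hits this, this is roughly how I confirmed the extra space had shown up (same pool and drives as above):
Code:
# pool SIZE and the members of the upgraded VDEV should now reflect the 16TB drives
zpool list -v bigpool

# the zfs_member partitions on the replaced drives should now show 14.6T instead of 10.9T
lsblk -o name,partuuid,fstype,size /dev/sdb /dev/sdc /dev/sdd /dev/sde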
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I hope not--it's not exactly like you can have it talk with a therapist. But I had to do this myself a few months back and haven't seen any ill effects yet.
 