I have a pool with a VDEV that I am attempting to resize by replacing one drive at a time. It is a 4x12TB Z1 VDEV in which I have replaced all four 12TB drives with 16TB drives. The pool has three Z1 VDEVs: one 16TBx4, one 12TBx4, and this one, a 12TBx4 that I'm upgrading to 16TBx4.
I offlined each drive prior to replacement and removed it (I didn't have a spare slot). I then slotted the 16TB drive and triggered the replace process. TrueNAS put the VDEV into degraded status each time, but it resilvered and the VDEV was healthy again. I repeated this for each of the remaining three drives.
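I did each swap through the GUI, but the rough CLI equivalent of each step (device names here are placeholders, not my actual disks) would be:

Code:
# take the old drive offline before pulling it
zpool offline bigpool <old-partition-uuid>
# after physically swapping disks, replace the offlined member with the new drive
zpool replace bigpool <old-partition-uuid> /dev/disk/by-id/<new-16tb-disk>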
At the end of the process I expected that the pool size would increase. But after the 4th drive finished resilvering the pool size did not change.
I am running Dragonfish-24.04-RC.1.
I've done a fair amount of research on others with the same issue, so I've tried the following (the exact commands are sketched after this list):
1. I've rebooted TrueNAS Scale
2. I've confirmed autoexpand is turned on with the zpool get autoexpand command
3. I've turned off and turned back on autoexpand with the zpool set autoexpand=off and on commands
4. I've used the zpool online -e command for each of the drives in the VDEV
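Here is roughly what I ran for steps 2 through 4 (the partition UUID is a placeholder; I repeated the online -e for each of the four replaced drives):

Code:
# step 2: confirm autoexpand is enabled on the pool
zpool get autoexpand bigpool

# step 3: toggle autoexpand off and back on
zpool set autoexpand=off bigpool
zpool set autoexpand=on bigpool

# step 4: ask ZFS to expand a member onto its new capacity
zpool online -e bigpool <partition-uuid>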
None of these steps has resolved my issue. The one clue that I don't understand: the GUI shows two VDEVs with the 16TB drives (14.55TiB), but when I run the zpool list -v bigpool command, only one VDEV shows 14.6T members:
Code:
NAME                                        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bigpool                                     145T   113T  32.9T        -         -    15%    77%  1.00x    ONLINE  /mnt
  raidz1-0                                 58.2T  44.2T  14.0T        -         -    15%  76.0%      -    ONLINE
    bd75b86c-cac3-4dcb-b5fe-5dca73e3e7d9   14.6T      -      -        -         -      -      -      -    ONLINE
    6c63cbbb-824b-422e-aa4c-c255ec5828fc   14.6T      -      -        -         -      -      -      -    ONLINE
    6185f4f0-4ce7-4316-932c-c0468926d60f   14.6T      -      -        -         -      -      -      -    ONLINE
    e69dfd88-4cac-4fd1-b209-086fe5c41e4c   14.6T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                 43.6T  34.1T  9.51T        -         -    16%  78.2%      -    ONLINE
    2c876bc1-e25c-4a1e-bbb4-05e137db5f42   10.9T      -      -        -         -      -      -      -    ONLINE
    30fab442-6717-4d7e-b439-ea4a31c564aa   10.9T      -      -        -         -      -      -      -    ONLINE
    959f59b3-f4aa-401e-b9b9-1c1ba38cd545   10.9T      -      -        -         -      -      -      -    ONLINE
    65547adc-d97b-414e-af3a-2830cd3b6d22   10.9T      -      -        -         -      -      -      -    ONLINE
  raidz1-2                                 43.6T  34.2T  9.43T        -         -    15%  78.4%      -    ONLINE
    e973d868-6056-45d7-b2d9-d7af129dc4f0   10.9T      -      -        -         -      -      -      -    ONLINE
    6bcac0ea-4128-4a11-946c-ef9028b4afa0   10.9T      -      -        -         -      -      -      -    ONLINE
    49650b25-3f74-49b7-a6a3-c5a02179233f   10.9T      -      -        -         -      -      -      -    ONLINE
    7202cd7d-8f7b-4d1b-a8f8-bd4729b4c8b6   10.9T      -      -        -         -      -      -      -    ONLINE
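If it would help with diagnosis, I can also compare each new disk's partition size against the whole-disk size, along these lines (the device name is a placeholder):

Code:
# show the disk and its partitions with exact byte sizes
lsblk -b -o NAME,SIZE,TYPE /dev/sdX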
My research hasn't uncovered anything else to try. Any suggestions? Might this be a Dragonfish bug?