How to upgrade all drives in TrueNAS Scale for more storage

Knight81 | Cadet | Joined: Feb 20, 2024 | Messages: 1
I have been bouncing around this forum for a while now and cannot seem to find a solution to my problem; if any of you know of a forum post where this is discussed, I would appreciate being pointed in the right direction.
I had previously built a ZFS pool (1 x RAIDZ1 | 12 wide | 3.63 TiB), giving me 38.67 TiB of usable capacity. I then attempted to upgrade all of these drives to larger versions (moving from 4 TB drives to 6 TB drives). I was able to swap out all of the hardware, and every replacement resilvered successfully without issue, but the pool still shows the old capacity. I found the following post for TrueNAS Core and attempted the steps outlined there, to no avail:
So right now I am stuck. It is not a critical issue, as I am just testing, but later on I would absolutely like to have the ability to resize a pool by upgrading a set of drives.
From the GUI I have also attempted to run the "Expand" procedure ("Expand the pool to fit all available disk space"), but I always receive an error message:
[EFAULT] Command partprobe /dev/sdb failed (code 1): Error: Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
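If it helps, these are the sorts of commands that show the current state of a disk and the pool (a rough sketch only; "tank" stands in for the actual pool name, and /dev/sdb is just the disk named in the error):

  # Compare the disk's size with its partition layout; the ZFS data partition
  # should reach (nearly) to the end of the new 6 TB disk.
  lsblk -b -o NAME,SIZE,TYPE /dev/sdb
  parted /dev/sdb unit GiB print free

  # Check whether ZFS already sees any expandable space on the pool.
  zpool list -o name,size,expandsz,free tank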
Any help is appreciated.
 

Arwen | MVP | Joined: May 17, 2014 | Messages: 3,611
If I remember correctly, there is a bug in TrueNAS SCALE that makes what you did fail. (Your data is safe; just the drive expansion failed.)

I don't remember the fix. Perhaps someone else will either know or can link to a forum post with the fix.
 

danb35 | Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504
Arwen said:
If I remember correctly, there is a bug in TrueNAS SCALE that makes what you did fail.
There is indeed. Although "12-wide RAIDZ1" and upgrading 12 disks from 4 TB to 6 TB both make my eyes twitch.

It's fairly straightforward to fix this from the CLI:
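In outline it amounts to growing each replaced disk's ZFS data partition and then telling ZFS to use the new space. A rough sketch only, not a copy-paste recipe: it assumes a pool named "tank" and that the data partition is partition 1 on /dev/sdb, as in the error message above; repeat per disk with the correct device and partition number.

  # Grow the ZFS data partition to the end of the disk (parted may ask for confirmation).
  parted /dev/sdb resizepart 1 100%
  # Ask the kernel to re-read the partition table; if it refuses (as in the error above), a reboot works too.
  partprobe /dev/sdb

  # Let ZFS claim the newly available space on that device
  # (use the device name exactly as it appears in "zpool status").
  zpool online -e tank sdb1

Setting "zpool set autoexpand=on tank" beforehand lets the pool grow on its own once every member of the vdev has been expanded.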
 

PhilD13 | Patron | Joined: Sep 18, 2020 | Messages: 203
danb35 said:
There is indeed. Although "12-wide RAIDZ1" and upgrading 12 disks from 4 TB to 6 TB both make my eyes twitch.

It's fairly straightforward to fix this from the CLI:
Is this bug permanent through upgrades/migrations?
If a system was installed with, and is currently running, an affected version of Cobia (23.10.1), will a later upgrade to an unaffected version, or a migration to Dragonfish, eliminate the bug?
 

danb35 | Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504
I think you need to distinguish between the bug and its effects on your disks. iX have said that the bug will be fixed in 23.10.2. But if you've replaced disks while the bug was in place, I'd be very surprised if upgrading automatically fixes the partition tables on those disks.
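If you want to check whether a given system was affected, one quick way (a sketch; it assumes the twelve data disks are /dev/sdb through /dev/sdm, so adjust to your layout) is to look for unallocated space left at the end of each disk after the replacements:

  # Print each disk's partition layout plus any free space remaining at the end.
  for d in /dev/sd[b-m]; do parted -s "$d" unit GiB print free; done

If the data partitions stop well short of the end of the disks, the manual resize/expand step is still needed even after upgrading.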
 

PhilD13 | Patron | Joined: Sep 18, 2020 | Messages: 203
I guess the question was not worded very well, but yes, that was the question.
 