[SOLVED] No Capacity Expansion After Disk Upgrade

Bmck26

Dabbler
Joined
Dec 9, 2013
Messages
48
Hello everyone. It's been a while since I last posted anything on the forums, mainly because I haven't run into any major issues I couldn't solve myself in a long time. My current config is in my signature below.

However, I have recently run into a problem while upgrading the drives in my main storage pool. The pool was made up of 6x 6TB IronWolf drives in RAIDZ2, and I replaced them with 12TB drives of the same model. I took each drive offline and resilvered the new replacement drive, one at a time. That all went smoothly, but the capacity of the pool didn't change after the last disk finished resilvering. I ran a scrub and tried the Expand function in the GUI, which reported that it finished successfully, but again the capacity did not change. I rebooted the system a couple of times with no effect.

Everything else is working fine. The drives show up in the GUI with no errors, and each reports 10.91 TiB capacity. I don't know if I'm missing something, but I didn't see anything else in the documentation on managing pools. This is an old vdev that was configured back in 2018, so it has gone through a lot of FreeNAS updates. I converted from TrueNAS CORE to SCALE a couple of months ago and am on the latest version. I would appreciate any help figuring out why the capacity didn't change with the new drives installed.
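
If it helps, I can also post the output of zpool list for the pool; as far as I understand, the EXPANDSZ column there should show whether ZFS sees any unexpanded space on the vdev (pool name from my setup):

Code:
zpool list -v tank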
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Please post the output of zpool status and camcontrol devlist.
 

Bmck26

Dabbler
Joined
Dec 9, 2013
Messages
48
Zpool Status Output

Code:
root@truenas[~]# zpool status
  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:20 with 0 errors on Fri Dec  1 03:45:22 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdk3      ONLINE       0     0     0

errors: No known data errors

  pool: const
 state: ONLINE
  scan: scrub repaired 0B in 00:22:44 with 0 errors on Sun Nov 26 00:22:46 2023
config:

        NAME                                    STATE     READ WRITE CKSUM
        const                                   ONLINE       0     0     0
          a3856bc7-91ef-11ed-b6c7-a0369f1fbcc0  ONLINE       0     0     0
          a38e13c3-91ef-11ed-b6c7-a0369f1fbcc0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 05:39:58 with 0 errors on Sun Dec  3 19:14:03 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            590fc607-328c-4e24-b7e4-22a04394dd0b  ONLINE       0     0     0
            88964194-83b5-4d5b-836a-325d82fbbce6  ONLINE       0     0     0
            159706ca-5f77-4167-b50d-e4b90cd1a70a  ONLINE       0     0     0
            dafb4a76-f39c-45a0-96a5-87ce5d9db059  ONLINE       0     0     0
            ff876ec8-f9d1-4973-abd2-31c99a4027a9  ONLINE       0     0     0
            c7f18ffc-fb64-48fd-9aae-360977f01798  ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
  scan: scrub repaired 0B in 00:12:13 with 0 errors on Sun Nov 26 00:12:15 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank2                                     ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            d0fec0d1-a270-11ed-bc11-a0369f1fbcc0  ONLINE       0     0     0
            ecab2c56-a27c-11ed-9c04-a0369f1fbcc0  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            bce0322a-a27e-11ed-9c04-a0369f1fbcc0  ONLINE       0     0     0
            db5309c6-a290-11ed-9c04-a0369f1fbcc0  ONLINE       0     0     0

errors: No known data errors

I tried to run camcontrol devlist, but it returned "command not found." I don't know if there is another command that's supposed to be used on SCALE for the same purpose.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
@Bmck26 "command not found" may also indicate that you lack privileges. Did you run the command with elevated privileges, i.e. sudo camcontrol devlist, or did you become root first with sudo -s?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
camcontrol is a BSD command. Try parted -l.
 

Tony-1971

Contributor
Joined
Oct 1, 2016
Messages
147
Hello,
There is also the autoexpand pool property to check:
Code:
root@tn-xeond[~]# zpool get autoexpand tank-big
NAME      PROPERTY    VALUE   SOURCE
tank-big  autoexpand  on      local

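For your pool it should be something like this (using the pool name "tank" from your zpool status output; the set command is only needed if the property is currently off):

Code:
zpool get autoexpand tank
zpool set autoexpand=on tank
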
Best Regards,
Antonio
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
@Etorix is right... I'm using TrueNAS CORE here and gave you the BSD command... Please post the output of parted -l (I trust him that it's the equivalent); the goal is to have the operating system list all the drives it sees.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There's a bug in how SCALE 23.10 partitions disks; it's supposed to be fixed in 23.10.1 (not to be confused with 23.10.0.1). Further information is in this thread; notably absent there is a description of what should be done when you've already added a disk to your pool.
 

Bmck26

Dabbler
Joined
Dec 9, 2013
Messages
48
So, based on that other thread and the bug ticket (which I can't read because I need permission from an admin to see it), I'm going to need to destroy the pool and create a new one to have access to all the available space. I was hoping to avoid that, since creating a local backup and restoring everything to a new pool is very time-consuming.

Code:
root@truenas[~]# parted -l
Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sda: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sdb: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Model: ATA ST4000NM0033-9ZM (scsi)
Disk /dev/sdc: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  4001GB  3999GB  zfs


Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sdd: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Model: ATA ST4000NM0033-9ZM (scsi)
Disk /dev/sde: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  4001GB  3999GB  zfs


Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sdf: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/md124: 2144MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2144MB  2144MB  linux-swap(v1)


Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/md123: 2144MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2144MB  2144MB  linux-swap(v1)


Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/md127: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2144MB  2144MB  linux-swap(v1)


Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/md125: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2144MB  2144MB  linux-swap(v1)


Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/md126: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2144MB  2144MB  linux-swap(v1)


Error: /dev/md127: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md127: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sdm: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Error: /dev/md125: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md125: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: Unknown (unknown)
Disk /dev/zd0: 5906GB
Sector size (logical/physical): 512B/16384B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                          Flags
 1      17.4kB  16.8MB  16.8MB               Microsoft reserved partition  msftres
 2      16.8MB  5906GB  5906GB  ntfs         Basic data partition          msftdata


Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sdk: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      2097kB  3146kB  1049kB                     bios_grub, legacy_boot
 2      3146kB  540MB   537MB   fat32              boot, esp
 3      540MB   68.7GB  68.2GB  zfs


Error: /dev/md123: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md123: 2144MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: ATA PNY CS900 1TB SS (scsi)
Disk /dev/sdi: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  1000GB  998GB   zfs


Model: ATA ST12000VN0008-2Y (scsi)
Disk /dev/sdg: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  5999GB  5999GB  zfs
 2      5999GB  6001GB  2147MB                     swap


Error: /dev/md126: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md126: 2144MB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: ATA PNY CS900 1TB SS (scsi)
Disk /dev/sdl: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  1000GB  998GB   zfs


Error: /dev/md124: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md124: 2144MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: ATA PNY CS900 1TB SS (scsi)
Disk /dev/sdj: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  1000GB  998GB   zfs


Model: ATA PNY CS900 1TB SS (scsi)
Disk /dev/sdh: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      65.5kB  2148MB  2147MB
 2      2148MB  1000GB  998GB   zfs
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
It might be possible to replace the drives one at a time through the CLI, doing the partitioning manually, but you'd need the help of a Linux expert for the right partitioning commands.

(CORE user watching a SCALE bug with hands firmly clasped behind his back here…)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I'm going to need to destroy the pool and create a new one to have access to all the available space
No reason at all to expect that. What's likely needed is to wipe the partition table from each of the new disks, one at a time, write a new partition table, and then re-replace that disk. This could be done today at the CLI. What's unknown at this point is to what extent, if any, 23.10.1 will automate it through the GUI.

My guess is that, once 23.10.1 is released, the process is going to look like this:
  • Offline the disk in question - GUI
  • Delete the partition table from that disk: wipefs -a /dev/sdg - CLI
  • Replace the offline disk with itself - GUI
  • Repeat for each remaining disk
Hopefully 23.10.1 will include something to GUI-fy the whole process, but I don't think I'd bet on it.
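
Roughly, for each disk, that would look something like the following (device names here are examples only; be absolutely sure which /dev node is the offlined disk before wiping anything):

Code:
# 1. Offline the disk in the GUI (or: zpool offline tank <gptid-from-zpool-status>)
# 2. Confirm which /dev node that physical disk is:
lsblk -o NAME,SIZE,SERIAL,MODEL
# 3. Wipe its partition table (destructive):
wipefs -a /dev/sdX
# 4. Back in the GUI, replace the offlined member with the same (now blank) disk
#    and let the resilver finish before moving on to the next disk.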
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
So, based on that other thread and the bug ticket (which I can't read because I need permission from an admin to see it), I'm going to need to destroy the pool and create a new one to have access to all the available space. I was hoping to avoid that, since creating a local backup and restoring everything to a new pool is very time-consuming.
You may have already fixed everything, but I happened to notice this today. For months my flash pool (two mirror vdevs of 1.75TB drives) had shown a warning that the capacities weren't matched. I only had 2.xTB of space and didn't put much thought into it, since it was more than plenty for the task.

Today, while fixing another issue, I decided to look into it.
1) lsblk showed one of the drives in the mirror was partitioned in a different order, and its data partition was only 1TB. These drives replaced 1TB drives, so it seems that's where the partition size came from.
2) I selected the improperly partitioned drive in the GUI and chose 'Detach'.
3) The GUI then showed a single-disk vdev and a mirror.
4) I physically removed and reinserted the incorrectly partitioned drive.
5) In the GUI, I selected the single disk that wasn't in a mirror, chose "Extend", and selected the unassigned drive.
6) It resilvered in a few minutes, was properly partitioned, showed the correct capacity, and I got another 600GB I didn't know I was missing :)

Hope this reaches you before you do something more complicated.
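
For reference, the check in step 1 was just something along the lines of:

Code:
# A data partition much smaller than the disk itself is the one that
# kept the old drive's partition size.
lsblk -o NAME,SIZE,TYPE,FSTYPE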
 

yeeahnick

Dabbler
Joined
Feb 2, 2020
Messages
17
Had a very similar issue. I replaced a mirror pool of 2x6TB with 2x12TB, and after the resilvering the pool was still showing 6TB (autoexpand was enabled). I then tried the Expand button, which said it was successful, but nothing changed. After that I noticed there was no expandable space available on the drives' partitions. I decided to detach and reattach the drives of the pool's vdev one by one. As soon as the second drive started to resilver, the pool capacity jumped to 12TB. Hope this can help others until the "fix" is released.
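
For reference, I believe the raw ZFS equivalent of that detach/reattach for a mirror is roughly the following (pool and device names are placeholders; this only applies to mirrors, not RAIDZ, and on SCALE the GUI Detach/Extend flow is the supported way since it also handles partitioning):

Code:
# Remove one leg of the mirror:
zpool detach <pool> <old-member>
# Clear the stale partition table on that disk (destructive):
wipefs -a /dev/sdX
# Attach it back to the remaining member and let it resilver:
zpool attach <pool> <remaining-member> /dev/sdX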
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
Had a very similar issue. I replaced a mirror pool of 2x6TB with 2x12TB, and after the resilvering the pool was still showing 6TB (autoexpand was enabled). I then tried the Expand button, which said it was successful, but nothing changed. After that I noticed there was no expandable space available on the drives' partitions. I decided to detach and reattach the drives of the pool's vdev one by one. As soon as the second drive started to resilver, the pool capacity jumped to 12TB. Hope this can help others until the "fix" is released.
Sounds like you've already got it resolved, but instead of going one by one, you can just use lsblk to see which disk is partitioned incorrectly.
 

Bmck26

Dabbler
Joined
Dec 9, 2013
Messages
48
For anyone keeping track of a solution: I updated TrueNAS SCALE to 23.10.1 earlier this week. I tried the "Expand" pool button in the GUI again after the update, but that did not work. So I took each drive offline one at a time, physically disconnected the drive, rebooted TrueNAS, and re-inserted the drive. Instead of reattaching the drive to the pool immediately, I created a new pool with the single drive, then destroyed it and selected the option to destroy all data on the drive. After that, I used the replace-drive option to attach it back to the main pool and start resilvering. I did it this way because I got an error message the first time I tried to resilver one of the drives that still had the original partitions on it. I repeated this for all 6 drives in the pool. Finally, I used the "Expand" pool button again and... bingo! The pool doubled in capacity as expected.

I'm sure someone more familiar with the basic hard drive tools in Linux could have erased the drives and cleared the partitions with a CLI tool, but this was the easiest way I saw to do it without using the CLI. As far as I know, there is no option in the GUI to just erase/reformat an unattached disk, though I could be wrong, since I'm not that familiar with the newer interface. Most pools on my server were set up 6 or 7 years ago, when the UI was very different.
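
(For anyone who does prefer the CLI: I believe something like wipefs, run against the whole disk while it is detached from the pool, would have cleared the old partitions in one step. Double-check the device name first, since this is destructive.)

Code:
# Clear all partition-table and filesystem signatures from the whole disk:
wipefs -a /dev/sdX
# sgdisk --zap-all /dev/sdX would do much the same, if sgdisk is available.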
 