SOLVED: Replaced drives but pool not expanding after resilver

chronowraith

Cadet
Joined
Feb 18, 2018
Messages
4
Tearing my hair out trying to figure out what's happening here. I have a TrueNAS-12.0-U5.1 install that was previously using 8x4TB HDDs in a RAIDZ2 configuration. I recently purchased 8x8TB drives and went through the process of taking one drive offline at a time, shutting down the system, swapping the offline drive for a new HDD, and then issuing a "replace" on the drive in question (using the process outlined here: https://www.truenas.com/docs/core/storage/disks/diskreplace/). This took a while, and when I finished the 8th drive I expected the pool size to increase once the resilver finished.

All of my interaction with the zpool was through the GUI.
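For anyone following along, here is a rough sketch of the CLI equivalent of that per-disk workflow (the GUI handles the partitioning and gptid naming automatically; the names below are placeholders, not from my system):

Code:
# Illustrative only -- repeat once per disk, waiting for each resilver to finish
zpool offline zpool gptid/<old-partition-gptid>
# ...shut down, physically swap the disk, boot back up...
zpool replace zpool gptid/<old-partition-gptid> gptid/<new-partition-gptid>
zpool status zpool    # wait for the resilver to complete before the next disk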

Well, it doesn't seem to have done that; instead, my pool is the same size as before.

[Attachment: pool.jpg]


Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zpool    58T  25.7T  32.3T        -         -    10%    44%  1.00x    ONLINE  /mnt


[Attachment: disks.jpg]


What's going on here and how can I get this pool expanded?
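For reference, the pool's autoexpand property (which controls whether the pool grows automatically once every member of a vdev has been replaced) can be checked and, if needed, enabled like this; shown as a sketch, not output from this system:

Code:
zpool get autoexpand zpool     # should be "on" for automatic growth
zpool set autoexpand=on zpool  # enable it if it is off
# then expand each member manually, e.g.:
zpool online -e zpool gptid/<member-gptid>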

Thanks!
 

chronowraith

Cadet
Joined
Feb 18, 2018
Messages
4
I've done some more investigation and still have no idea what's happening, but I checked the partition sizes using gpart:

Code:
gpart show
=>         40  15628053088  ada0  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada1  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>       34  125045357  ada2  GPT  (60G)
         34       1024     1  freebsd-boot  (512K)
       1058          6        - free -  (3.0K)
       1064  125044320     2  freebsd-zfs  (60G)
  125045384          7        - free -  (3.5K)

=>         40  15628053088  ada3  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada4  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada5  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada6  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada8  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada7  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)


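For completeness, the raw drive size can also be double-checked against the partition table on any one disk; a sketch using ada0 as an example (output omitted):

Code:
diskinfo -v ada0      # whole-disk mediasize in bytes and sectors
gpart show -p ada0    # partition sizes with provider names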
I've also tried running the zpool online -e command on each member (the vdev layout from zpool status is shown first):
Code:
        zpool                                           ONLINE       0     0 0
          raidz2-0                                      ONLINE       0     0 0
            gptid/a65fbd25-fdeb-11eb-9c04-a8a15912db53  ONLINE       0     0 0
            gptid/d40638b0-0112-11ec-bd21-d45d649cd60a  ONLINE       0     0 0
            gptid/695e5317-00bc-11ec-9c04-a8a15912db53  ONLINE       0     0 0
            gptid/ea245254-0066-11ec-9c04-a8a15912db53  ONLINE       0     0 0
          raidz2-1                                      ONLINE       0     0 0
            gptid/9197179a-fd52-11eb-a9b8-a8a15912db53  ONLINE       0     0 0
            gptid/b1873158-0314-11ec-8b80-d45d649cd60a  ONLINE       0     0 0
            gptid/92949068-021c-11ec-8710-d45d649cd60a  ONLINE       0     0 0
            gptid/0cdcc236-02a5-11ec-b680-d45d649cd60a  ONLINE       0     0 0

root@freenas:~ # zpool online -e zpool gptid/a65fbd25-fdeb-11eb-9c04-a8a15912db53
root@freenas:~ # zpool online -e zpool gptid/d40638b0-0112-11ec-bd21-d45d649cd60a
root@freenas:~ # zpool online -e zpool gptid/695e5317-00bc-11ec-9c04-a8a15912db53
root@freenas:~ # zpool online -e zpool gptid/ea245254-0066-11ec-9c04-a8a15912db53

root@freenas:~ # zpool online -e zpool gptid/9197179a-fd52-11eb-a9b8-a8a15912db53
root@freenas:~ # zpool online -e zpool gptid/b1873158-0314-11ec-8b80-d45d649cd60a
root@freenas:~ # zpool online -e zpool gptid/92949068-021c-11ec-8710-d45d649cd60a
root@freenas:~ # zpool online -e zpool gptid/0cdcc236-02a5-11ec-b680-d45d649cd60a

root@freenas:~ # zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
freenas-boot  59.5G  16.4G  43.1G        -         -      -    27%  1.00x    ONLINE  -
zpool           58T  25.7T  32.3T        -         -    10%    44%  1.00x    ONLINE  /mnt



It still shows the space as free instead of as part of the pool.
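A per-vdev view should make it clearer whether the new space has actually been picked up; a sketch of the commands (output omitted):

Code:
zpool list -v zpool                    # SIZE and EXPANDSZ per vdev and per member
zpool get autoexpand,expandsize zpool  # expandsize reports uninitialized space, if any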
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You have 32TB "free" in the pool... which shows as 58TB in total.

I don't see any problem here, since you can't have had that much total or free space before starting the swaps (8x4TB is only 32TB raw, minus parity).
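As a rough sanity check: zpool list reports raidz SIZE including parity space, in TiB, so 8 x 8 TB drives land right around the 58T shown above.

Code:
# 8 drives x 8 TB (decimal) converted to TiB -- matches the ~58T SIZE in zpool list
echo "8 * 8 * 10^12 / 2^40" | bc -l    # ~58.2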

EDIT: OK, I see you're complaining about the Dashboard widget for the pool showing free space as lower than the real number.

Possibly a shift-refresh of the dashboard page in the browser will fix that; if not, a reboot.
 

chronowraith

Cadet
Joined
Feb 18, 2018
Messages
4
OK, it sounds like maybe I misunderstood what's happening and what the zpool list output is telling me? Per your comment, it sounds like I've increased the pool size successfully. Maybe the right question, then, is: why does everything else (e.g. the dataset, the dashboard widget) seem to show my capacity as ~27TB rather than the full pool size? My understanding is that unless you have a quota (I don't), a dataset should expand to fill the available pool space. Would I need to set a reservation to have the system properly report my full capacity? I don't see any capacity currently reserved for the dataset, so why would the existing info max out at the size of my original disks?

As a side note, a force refresh and a reboot of the server didn't change anything.

Sorry if these are simple questions; I can't seem to reconcile what I've read in the docs and other threads with what I'm seeing. It might just be my unfamiliarity with ZFS showing through.
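(For reference, dataset quotas/reservations and per-dataset space accounting can be checked like this; shown as a sketch, output omitted:)

Code:
zfs get -r quota,refquota,reservation,refreservation zpool
zfs list -o space -r zpool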

[Attachment: pools.jpg]


[Attachment: quota.jpg]
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
was previously using 8x4TB hdds in a RAIDZ2 configuration.

This doesn't actually appear to be true. It appears you had 2 x 4 x 4TB drives in RAIDZ2.

I.e. two RAIDZ2 vdevs, each with 4 drives, for a total usable capacity of about 50%.

Did you actually want a single 8-way RAIDZ2? That would give you 75% of the drives' capacity, rather than 50%.
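Rough usable capacity for the two layouts with your new drives, ignoring ZFS overhead (decimal TB):

Code:
# current: 2 vdevs, each a 4-wide RAIDZ2 -> 2 x (4 - 2) x 8 TB usable
echo "2 * (4 - 2) * 8" | bc    # 32
# single 8-wide RAIDZ2 -> (8 - 2) x 8 TB usable
echo "(8 - 2) * 8" | bc        # 48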

You're in a bit of a pickle now if that's what you wanted :-\

Although, if what you want is an 8-way RAIDZ2 made out of your 8TB drives... I think you could work out a way to shift the data around, presuming you still have the old drives.

How many SATA ports do you have free?
 

chronowraith

Cadet
Joined
Feb 18, 2018
Messages
4
Ahh, OK, so this is because of my 2-vdev setup. I feel like I probably looked into this briefly when I built the system 6 years ago, but it's been mostly fire-and-forget since then, until I started looking at upgrading the storage capacity.

Taking this into account, I found a storage calculator that accounts for the number of RAID groups, and it does look like the numbers I'm seeing now reflect the increased capacity. I guess I just didn't pay enough attention to how much of my original capacity was usable, as that would have tipped me off; as it was, I didn't start really looking into it until I didn't see the capacity I was expecting, which turned out to be a flawed expectation in the first place.

Alright, thanks for helping clear this up! I'm not looking to change my storage configuration at this time, but I was clearly confused about a few aspects of how much space I was getting with this upgrade. It's still more than enough to keep me going for a while.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Basically, with RAIDZn you use n drives per vdev for parity.

So, 2 out of 4 in your case.
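As a rule of thumb, the usable fraction of raw space is (width - parity) / width per vdev, before ZFS overhead:

Code:
echo "scale=2; (4 - 2) / 4" | bc    # .50 -> 4-wide RAIDZ2 (the current vdevs)
echo "scale=2; (8 - 2) / 8" | bc    # .75 -> a single 8-wide RAIDZ2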
 