Replaced 3TB drives with 4TB, pool will "kind of" not autoexpand

hadmanysons

Cadet
Joined
Jan 15, 2019
Messages
6
I'm not sure what's going on, but some places report the old size of the pool and some the new. I started by creating this thread on Reddit and got no help there: https://old.reddit.com/r/freenas/comments/af92uq/autoexpand_kind_of_not_working_after_replacing/

Basically, the dashboard shows the correct amount that should now be free, 10TB, but Storage->Pools only shows 7TB free (the old drives' value). My Windows share (Samba) also shows only 7TB.

[Attached screenshots: Dashboard.png, share drive.png, Storage Pool.png]
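
For reference, the same numbers can be cross-checked from the command line (pool name MainPool as shown in the zfs list output below; just a sketch of what to run):

Code:
zfs list MainPool    # usable space after parity, which is what the share should see
zpool list MainPool  # raw pool space across all four disks, parity included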



Furthermore, "zpool list" shows 10TB free, for a total of 14.5TB availabe. Output of "geom label list" shows that all 4 drives are reporting as 3.6T. I'm kind of at a loss here.

[Attached screenshot: zpool.png]


Code:
NAS# geom label list
Geom name: ada0p2
Providers:
1. Name: gptid/57cf38c9-00dc-11e7-8186-6805ca3edb68
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 7809842696
   length: 3998639460352
   index: 0
Consumers:
1. Name: ada0p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada1p2
Providers:
1. Name: gptid/3e6b7630-6ab3-11e8-9b42-6805ca3edb68
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 7809842696
   length: 3998639460352
   index: 0
Consumers:
1. Name: ada1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada2p2
Providers:
1. Name: gptid/f7055b1f-1601-11e9-badb-6805ca3edb68
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 7809842696
   length: 3998639460352
   index: 0
Consumers:
1. Name: ada2p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada3p2
Providers:
1. Name: gptid/d1235125-cd7f-11e6-bafc-6805ca3edb68
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 7809842696
   length: 3998639460352
   index: 0
Consumers:
1. Name: ada3p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: da0p1
Providers:
1. Name: gptid/8ec19678-d3ee-11e6-b47a-6805ca3edb68
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 1024
   length: 524288
   index: 0
Consumers:
1. Name: da0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0

Geom name: da1p1
Providers:
1. Name: gptid/b65ae93d-fde8-11e5-8719-0800274b59b4
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 1024
   length: 524288
   index: 0
Consumers:
1. Name: da1p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
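
The geom output above shows each data partition at 3.6T, so the partitions themselves look right. The other thing worth checking is whether the pool still reports unclaimed space (sketch only, same pool name as in the zfs list below):

Code:
zpool get autoexpand,expandsize MainPool   # autoexpand should be "on"; expandsize is space the pool hasn't claimed yet
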


I've gone through about 10 of the seemingly relevant results that pop up when I search the forum for "autoexpand", and nothing has worked so far.

I verified autoexpand was on before replacing the last drive.

Things I've tried:

Restarted the box

Set autoexpand off and then back on, and restarted the box

Ran "zpool online -e MainPool <guid>" on all four hard drives and restarted (rough sequence sketched below)

Is this a GUI/Samba bug, or did my pool not actually autoexpand?

Thanks in advance. I've tried to post everything that I thought could be relevant to the issue. I've been using FreeNAS for about 3 years, though this is my first time asking on this forum. And yes, I've RTFM'd everything I thought would be relevant; my apologies in advance if I missed something.
 

hadmanysons

Cadet
Joined
Jan 15, 2019
Messages
6
Code:
NAS# zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
MainPool                                                    3.23T  6.98T  28.8G  /mnt/MainPool
MainPool/.bhyve_containers                                  70.4M  6.98T  70.4M  /mnt/MainPool/.bhyve_containers
MainPool/.system                                             124M  6.98T   151K  legacy
MainPool/.system-09540feb                                    843M  6.98T   843M  /mnt/MainPool/.system-09540feb
MainPool/.system/configs-0dc2ca1e7fa9464d8c4d7c4fd81f6855   76.6M  6.98T  76.6M  legacy
MainPool/.system/cores                                      3.63M  6.98T  3.63M  legacy
MainPool/.system/rrd-0dc2ca1e7fa9464d8c4d7c4fd81f6855       33.9M  6.98T  33.9M  legacy
MainPool/.system/samba4                                      907K  6.98T   907K  legacy
MainPool/.system/syslog-0dc2ca1e7fa9464d8c4d7c4fd81f6855    8.57M  6.98T  8.57M  legacy
MainPool/.system/webui                                       128K  6.98T   128K  legacy
MainPool/.vm_cache                                          45.5M  6.98T   128K  /mnt/MainPool/.vm_cache
MainPool/.vm_cache/boot2docker                              45.4M  6.98T   128K  /mnt/MainPool/.vm_cache/boot2docker
MainPool/.vm_cache/boot2docker/initrd                       41.7M  6.98T  41.7M  /mnt/MainPool/.vm_cache/boot2docker/initrd
MainPool/.vm_cache/boot2docker/vmlinuz64                    3.61M  6.98T  3.61M  /mnt/MainPool/.vm_cache/boot2docker/vmlinuz64
MainPool/NAS                                                3.19T  6.98T  3.19T  /mnt/MainPool/NAS
MainPool/ShareDrive                                          128K  6.98T   128K  /mnt/MainPool/ShareDrive
MainPool/VMs                                                 134K  6.98T   134K  /mnt/MainPool/VMs
MainPool/iocage                                             3.49G  6.98T  3.44M  /mnt/iocage
MainPool/iocage/download                                     260M  6.98T   128K  /mnt/iocage/download
MainPool/iocage/download/11.1-RELEASE                        260M  6.98T   260M  /mnt/iocage/download/11.1-RELEASE
MainPool/iocage/images                                       128K  6.98T   128K  /mnt/iocage/images
MainPool/iocage/jails                                       2.09G  6.98T   128K  /mnt/iocage/jails
MainPool/iocage/jails/plex                                  2.09G  6.98T   238K  /mnt/iocage/jails/plex
MainPool/iocage/jails/plex/root                             2.09G  6.98T  2.89G  /mnt/iocage/jails/plex/root
MainPool/iocage/log                                          134K  6.98T   134K  /mnt/iocage/log
MainPool/iocage/releases                                    1.14G  6.98T   128K  /mnt/iocage/releases
MainPool/iocage/releases/11.1-RELEASE                       1.14G  6.98T   128K  /mnt/iocage/releases/11.1-RELEASE
MainPool/iocage/releases/11.1-RELEASE/root                  1.14G  6.98T  1.12G  /mnt/iocage/releases/11.1-RELEASE/root
MainPool/iocage/templates                                    128K  6.98T   128K  /mnt/iocage/templates
MainPool/jails                                              3.95G  6.98T   174K  /mnt/MainPool/jails
MainPool/jails/.warden-template-pluginjail-10.3-x64          527M  6.98T   518M  none
MainPool/jails/.warden-template-pluginjail-9.3-x64           506M  6.98T   496M  /mnt/MainPool/jails/.warden-template-pluginjail-9.3-x64
MainPool/jails/plexmediaserver_1                            2.94G  6.98T  3.44G  /mnt/MainPool/jails/plexmediaserver_1
MainPool/vm                                                 1.51G  6.98T   128K  /mnt/MainPool/vm
MainPool/vm/docker_host_0                                   1.51G  6.98T   887M  /mnt/MainPool/vm/docker_host_0
MainPool/vm/docker_host_0/docker                             612M  6.98T   612M  /mnt/MainPool/vm/docker_host_0/docker
MainPool/vm/docker_host_0/files                             45.2M  6.98T  45.2M  /mnt/MainPool/vm/docker_host_0/files
freenas-boot                                                7.31G   104G    31K  none
freenas-boot/ROOT                                           7.24G   104G    25K  none
freenas-boot/ROOT/11.2-BETA1                                 212K   104G   880M  /
freenas-boot/ROOT/11.2-BETA2                                 216K   104G   870M  /
freenas-boot/ROOT/11.2-RELEASE-U1                           6.13G   104G   765M  /
freenas-boot/ROOT/9.10-STABLE-201604261518                    55K   104G   463M  /
freenas-boot/ROOT/9.10-STABLE-201605021851                    45K   104G   481M  /
freenas-boot/ROOT/9.10-STABLE-201606270534                    56K   104G   594M  /
freenas-boot/ROOT/9.10.1                                      53K   104G   614M  /
freenas-boot/ROOT/9.10.2                                      48K   104G   636M  /
freenas-boot/ROOT/9.10.2-U1                                   41K   104G   636M  /
freenas-boot/ROOT/9.10.2-U2                                   46K   104G   637M  /
freenas-boot/ROOT/9.10.2-U6                                   57K   104G   639M  /
freenas-boot/ROOT/Corral-RELEASE                            1.11G   104G  1.11G  /
freenas-boot/ROOT/FreeNAS-1ac5f24e172b4785efcab5401aa5507f    97K   104G   456M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201604150515             42K   104G   526M  /
freenas-boot/ROOT/Initial-Install                              1K   104G   513M  legacy
freenas-boot/ROOT/Wizard-2016-04-10_01-40-28                   1K   104G   515M  legacy
freenas-boot/ROOT/default                                     41K   104G   514M  legacy
freenas-boot/grub                                           45.2M   104G  6.34M  legacy
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
MainPool 3.23T 6.98T
This looks right for 4 x 4 TB disks in RAIDZ1. You'll lose one disk to parity, then TB vs. TiB, then filesystem overhead and such. So, total capacity of 12 TB (or about 10.9 TiB) before the overhead; your system is showing 10.2 TiB. That looks within normal limits to me.
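
A rough back-of-the-envelope check (sketch only; it ignores ZFS metadata and slop space):

Code:
# 4 x 4 TB in RAIDZ1: one disk's worth of parity, then TB -> TiB
echo "scale=2; 3 * 4 * 10^12 / 1024^4" | bc    # prints 10.91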
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It was 10TB BEFORE the upgrade though.
Not possible. The most you could have seen with 4 x 3 TB disks in RAIDZ1 would have been 9 TB, or about 8.2 TiB, of capacity, and probably a bit less for the same reasons as above.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
But why does Samba only show that there's only 7TB total space?
It doesn't according to your screenshot above--it shows 6.98 TiB free of 10.1 TiB. That's correct.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Then why does "zpool list" show 14.5TB available?
It doesn't. It shows 14.5 TiB total, 10.1 TiB free. It's correct. zpool list doesn't account for parity.
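
Roughly three quarters of that raw size is what the datasets can use; the rest is parity, and a bit more goes to ZFS's own reservations and metadata:

Code:
echo "scale=2; 14.5 * 3 / 4" | bc    # ~10.87 TiB usable out of 14.5 TiB raw, before overhead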
 

hadmanysons

Cadet
Joined
Jan 15, 2019
Messages
6
I think I understand, but what about the fact that none of that changed when I added the last drive?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
what about the fact that none of that changed when I added the last drive?
I'm pretty sure that can't have been the case. Since you don't give any of the "before" output, though, it's pretty hard to say for sure.
 