Resize zfs

johnharet

Cadet
Joined
Nov 25, 2019
Messages
3
Hi. I know this topic comes up a lot, but it is giving me some trouble.
I created a hardware RAID 5 with 4x 4TB drives and 2x 3TB drives and got a zpool size of 16TB. After a few months, I bought two 4TB drives and replaced the two 3TB drives with them. And now I see this:
Code:
root@freenas:/dev/zvol/main/srvfiles #  zpool list main
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
main  18.2T  15.4T  2.82T         -    34%    84%  1.00x  ONLINE  /mnt

but my ZFS datasets don't show the extra space:
Code:
root@freenas:/dev/zvol/main/srvfiles # zfs list -t all -o space -r main
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
main                  945G  16.7T         0   11.8T              0      4.93T
main/backup_vm        945G   891G         0    891G              0          0
main/jails            945G    88K         0     88K              0          0
main/srvfiles         945G  4.06T         0     88K              0      4.06T
main/srvfiles/iscsi  2.25T  4.06T         0   2.73T          1.33T          0

Where did I go wrong?
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
I am not an expert. I believe you should see the increase in storage once the new drives have resilvered. How full your prior drives were will affect how long that takes, as will how much memory your system has, because ZFS loves RAM and will use as much as it can to make things like a resilver complete more quickly.
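
If a resilver were in progress, checking on it would look something like this (a minimal sketch, using the pool name from your output):
Code:
zpool status main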

Also, I am not familiar with ordinary RAID terminology, so I am unsure what RAID 5 means in your situation. I hope that you are NOT using an HDD controller card that has its own RAID/cache system, and definitely not with the RAID/cache enabled.

If there is some other issue, I wish you success in its resolution.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Just want to mention first that hardware raid is not recommended for a variety of reasons. You might want to look into that. Guarantee you someone will mention that!

Your dataset shouldn't need to be resized. It will use as much space as it needs to hold the files it contains, unless you put quotas on it or your pool runs out of space. As long as your replacement disks are done resilvering, you should see the full capacity. You are showing that your total space is 18.2T, and 16.7T is used to contain the data. You are likely losing the remaining ~1.5T to parity and other things that take up space in ZFS. Are you expecting the total size to be larger than 18.2T?
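
If you want to rule out quotas and reservations quickly, something like this should list them (just a sketch; these are standard ZFS properties):
Code:
zfs get -r quota,refquota,reservation,refreservation main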

I'm not sure how your vdevs (or hardware raid) are configured, but if you had a RAIDZ1 vdev of 4 4TB drives and a mirror vdev of 2 4TB drives combined into a pool, I think that gets you about 18TB.

For example, I have 2 RAIDZ2 vdevs of 8 2TB drives each, and this is what mine looks like:

Code:
root@nas:~ # zpool list Main
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Main    29T  16.3T  12.7T        -         -     5%    56%  1.00x  ONLINE  /mnt

root@nas:~ # zfs list -t all -o space -r Main
NAME                                                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Main                                                   8.36T  11.6T


If it's still not making sense, you might want to post the exact config of your vdevs; it's not clear whether you are combining all disks into one vdev (or array) or using multiple ones.
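
Something like this should show the layout (using your pool name from earlier):
Code:
zpool status -v main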
 

johnharet

Cadet
Joined
Nov 25, 2019
Messages
3
I know that hardware RAID isn't a great thing. These are my drives and RAID configuration:
Code:
root@freenas:/dev/zvol/main/srvfiles # mfiutil show drives
mfi0 Physical Drives:
 9 (  279G) ONLINE <IBM-ESXS VPCA300900EST1 N A3C0 serial=JXVAPDAMVCXSA3C0> SCSI-6 E1:S0
10 (  279G) ONLINE <IBM-ESXS VPCA300900EST1 N A3C0 serial=JXV6U1UMVCXSA3C0> SCSI-6 E1:S1
11 ( 3726G) ONLINE <ST4000VN000-1H41 SC46 serial=Z3049KGC> SATA E1:S3
12 ( 3726G) ONLINE <WDC WD40EFRX-68N 0A82 serial=WD-WCC7K7YK6LCK> SATA E1:S2
13 ( 3726G) ONLINE <WDC WD40EFRX-68W 0A82 serial=WD-WCC4E1UDZHEV> SATA E1:S4
14 ( 3726G) ONLINE <WDC WD40EFRX-68W 0A82 serial=WD-WCC4E3CECT1T> SATA E1:S5
15 ( 3726G) ONLINE <WDC WD40EFRX-68N 0A82 serial=WD-WCC7K1AF991Y> SATA E1:S6
16 ( 3726G) ONLINE <WDC WD40EFRX-68W 0A82 serial=WD-WCC4E1UDZV2L> SATA E1:S7
root@freenas:/dev/zvol/main/srvfiles # mfiutil show volumes
mfi0 Volumes:
  Id     Size    Level   Stripe  State   Cache   Name
 mfid0 (  278G) RAID-1     128K OPTIMAL Disabled
 mfid1 (   18T) RAID-5     128K OPTIMAL Disabled

And about using all the space, this is df:
Code:
root@freenas:/dev/zvol/main/srvfiles # df -h|grep main
main                                                              13T     12T    922G    93%    /mnt/main
main/backup_vm                                                   1.8T    891G    922G    49%    /mnt/main/backup_vm
main/jails                                                       922G     88K    922G     0%    /mnt/main/jails
main/srvfiles                                                    922G     88K    922G     0%    /mnt/main/srvfiles

I didn't set any quotas or reservations:
Code:
root@freenas:/dev/zvol/main/srvfiles # zfs get quota|grep main
NAME                                                           PROPERTY  VALUE  SOURCE
main                                                           quota     none   default
main/backup_vm                                                 quota     none   local
main/jails                                                     quota     none   default
main/srvfiles                                                  quota     none   local
main/srvfiles/iscsi                                            quota     -      -
root@freenas:/dev/zvol/main/srvfiles # zfs get reservation|grep main
NAME                                                           PROPERTY     VALUE   SOURCE
main                                                           reservation  none    default
main/backup_vm                                                 reservation  none    local
main/jails                                                     reservation  none    default
main/srvfiles                                                  reservation  none    local
main/srvfiles/iscsi                                            reservation  none    default
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You're not going to get a lot of help here while you are using hardware RAID in your system, as we know you're headed for disaster at some point.

That said, you could show us the output of zpool status -v so we can see what's going on with your pool.

Also, it will be useful to see whether autoexpand is on in the output of zpool get all main.
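
For reference, the commands would look something like this (the second is just a narrower check for the autoexpand property):
Code:
zpool status -v main
zpool get autoexpand main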
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Just to elaborate on that a bit, when you use hardware RAID, FreeNAS treats your 4 or however many disks as a single VDEV with one disk in it.

In doing that, you are now in a situation where, within that VDEV, FreeNAS keeps checksums on the disk so it can tell if the data later becomes corrupt, but it has no additional copies to repair it with if corruption is found, since only one copy exists.

Data integrity is therefore the job of your RAID card. A scrub will potentially find errors for you, but be unable to do anything about them. If you instead used ZFS software RAID, FreeNAS would have the parity for each file in addition to the checksum, and so could repair damaged files found in a scrub.

You will also be unable to move that RAID set to another machine and have FreeNAS pick up those disks (portability being another of the key benefits of using software RAID/ZFS).

Also, and probably most important, by not allowing FreeNAS direct access to the disks, you will not be getting SMART data, so you are relying on your RAID hardware to warn you about impending disk failures or errors. (SMART monitoring is a key feature of FreeNAS, and you put your data at risk by going without it.)
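
With direct disk access, a SMART query is as simple as something like this (a sketch only; the device name here is hypothetical, and whether it works at all through a RAID controller depends on passthrough support):
Code:
smartctl -a /dev/ada0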
 

johnharet

Cadet
Joined
Nov 25, 2019
Messages
3
Code:
root@freenas:/dev/zvol/main/srvfiles # zpool status -v main
  pool: main
 state: ONLINE
  scan: scrub repaired 0 in 2 days 10:53:43 with 0 errors on Tue Nov 12 10:54:59 2019
config:

        NAME        STATE     READ WRITE CKSUM
        main        ONLINE       0     0     0
          mfid1p2   ONLINE       0     0     0

errors: No known data errors
root@freenas:/dev/zvol/main/srvfiles # zpool get all main
NAME  PROPERTY                       VALUE                          SOURCE
main  size                           18.2T                          -
main  capacity                       84%                            -
main  altroot                        /mnt                           local
main  health                         ONLINE                         -
main  guid                           12276687153596358026           default
main  version                        -                              default
main  bootfs                         -                              default
main  delegation                     on                             default
main  autoreplace                    off                            default
main  cachefile                      /data/zfs/zpool.cache          local
main  failmode                       continue                       local
main  listsnapshots                  off                            default
main  autoexpand                     on                             local
main  dedupditto                     0                              default
main  dedupratio                     1.00x                          -
main  free                           2.79T                          -
main  allocated                      15.4T                          -
main  readonly                       off                            -
main  comment                        -                              default
main  expandsize                     -                              -
main  freeing                        0                              default
main  fragmentation                  34%                            -
main  leaked                         0                              default
main  feature@async_destroy          enabled                        local
main  feature@empty_bpobj            active                         local
main  feature@lz4_compress           active                         local
main  feature@multi_vdev_crash_dump  enabled                        local
main  feature@spacemap_histogram     active                         local
main  feature@enabled_txg            active                         local
main  feature@hole_birth             active                         local
main  feature@extensible_dataset     enabled                        local
main  feature@embedded_data          active                         local
main  feature@bookmarks              enabled                        local
main  feature@filesystem_limits      enabled                        local
main  feature@large_blocks           enabled                        local
main  feature@sha512                 enabled                        local
main  feature@skein                  enabled                        local
root@freenas:/dev/zvol/main/srvfiles # zfs get all main
NAME  PROPERTY              VALUE                  SOURCE
main  type                  filesystem             -
main  creation              Mon Jul 16 16:16 2018  -
main  used                  16.7T                  -
main  available             911G                   -
main  referenced            11.8T                  -
main  compressratio         1.74x                  -
main  mounted               yes                    -
main  quota                 none                   default
main  reservation           none                   default
main  recordsize            128K                   default
main  mountpoint            /mnt/main              default
main  sharenfs              off                    default
main  checksum              on                     default
main  compression           lz4                    local
main  atime                 on                     default
main  devices               on                     default
main  exec                  on                     default
main  setuid                on                     default
main  readonly              off                    default
main  jailed                off                    default
main  snapdir               hidden                 default
main  aclmode               passthrough            local
main  aclinherit            passthrough            local
main  canmount              on                     default
main  xattr                 off                    temporary
main  copies                1                      default
main  version               5                      -
main  utf8only              off                    -
main  normalization         none                   -
main  casesensitivity       sensitive              -
main  vscan                 off                    default
main  nbmand                off                    default
main  sharesmb              off                    default
main  refquota              none                   default
main  refreservation        none                   default
main  primarycache          all                    default
main  secondarycache        all                    default
main  usedbysnapshots       0                      -
main  usedbydataset         11.8T                  -
main  usedbychildren        4.93T                  -
main  usedbyrefreservation  0                      -
main  logbias               latency                default
main  dedup                 off                    default
main  mlslabel                                     -
main  sync                  standard               default
main  refcompressratio      1.91x                  -
main  written               11.8T                  -
main  logicalused           26.8T                  -
main  logicalreferenced     22.6T                  -
main  volmode               default                default
main  filesystem_limit      none                   default
main  snapshot_limit        none                   default
main  filesystem_count      none                   default
main  snapshot_count        none                   default
main  redundant_metadata    all                    default
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
So, first, you can see that autoexpand is on... which is what FreeNAS sets by default, so no real surprises there.

Second, we can confirm that as far as FreeNAS/ZFS is concerned, there is only a single disk involved here.

Since FreeNAS/ZFS is built on the foundational assumption that it always has direct access to all of the disks (and you're tricking it here with hardware RAID into thinking it has only one), there is no reason for the programmers to have ever considered the case that is happening here... i.e. a physical disk that has just grown in size without being replaced, something completely impossible under that assumption.

This means there's nothing in the code that will trigger the autoexpand based on a reboot or any other normal operation (normally the resilver process as you replace the last smaller disk would be the cue for it to happen).

As we've already established, you're way off the beaten track here, so I'm only going to point you in the direction of some guesses; I'm not keen to do the homework to test any of this, as I have no intention of ever ending up in this situation:

I know that some of the resilver process shares code with the scrub process, so maybe a scrub can somehow trigger the expand.
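
If you want to try that, the command would be something like this (untested for your case):
Code:
zpool scrub main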

Clearly you can't resilver anything as you only have 1 drive in the 1 vdev, so that's out of the question.

You could just copy off all your data, recreate the pool after wiping the disk, and copy the data back (or even use that as an opportunity to rework things to use software RAID at the same time... I highly recommend this option). See the sketch below.
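
A rough sketch of the copy-off step using ZFS replication (the pool name backuppool is hypothetical, and you should test this before trusting your data to it):
Code:
zfs snapshot -r main@migrate
zfs send -R main@migrate | zfs receive -F backuppool/main
# ...then recreate 'main' and reverse the send/receive to restore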

You could also look into some cases where folks have used gpart or other tools to edit the partitions on the disk manually... this would be way out of support and, in my eyes, extremely risky, so it's for you to decide based on how much you love your data and hate the other options.
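
Before touching anything, a read-only look at the current partition layout is safe enough (device name taken from your mfiutil and zpool output):
Code:
gpart show mfid1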

I also did a tiny bit of googling out of curiosity and found that maybe zpool online -e can do it. I haven't tested it, and I found it here: https://forums.freebsd.org/threads/zfs-pool-moved-to-bigger-drives-won't-expand.28703/ . I have no idea whether the conditions there match yours, or whether you need to offline the device first for that command to work on it. (Again, you're way off the reservation here, so experimentation and risk are all up to you.)
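
For what it's worth, the syntax would look something like this, using the device name from your zpool status output (untested here, so entirely at your own risk):
Code:
zpool online -e main mfid1p2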
 