Huge storage usage/available discrepancy

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
FreeNAS-11.3-U1

I have a 28.54T pool configured: 20x 1.64T drives in RAIDZ2, lz4 compression, and no dedup. There are no substantial snapshots (total snapshot usage is a couple hundred MB). The dashboard and zfs list are reporting that there is only 211G available. The biggest culprit is my media directory, which both report as 24.5T. The next largest directory is 1.29T.

However, when I drop into the command line and run du -hs against /mnt/ds01/media, it returns 11T.

What could be causing this? Which do I trust? If zfs list and the dashboard are incorrect, what's the best practice for resolving the discrepancy?
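
If it helps, ZFS's own per-dataset breakdown of the usage can be pulled as well; these are just the commands I'd compare (output not pasted here):
Code:
# ZFS's accounting: how much of USED is data, snapshots, children, reservations
zfs list -o space -r ds01

# the same directory as seen by du
du -hs /mnt/ds01/media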
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How many snapshots do you have? Also, how is your pool configured? Any flavor of RAIDZx will have some storage reserved for parity.
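For example, assuming the pool is the ds01 from your paths, something like this would show both:
Code:
# every snapshot on the pool, with the space each one holds
zfs list -t snapshot -r ds01

# vdev layout: RAIDZ level, disk count, log/cache devices
zpool status ds01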
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Zero snapshots of this pool outside of a few iocage snapshots that total a couple hundred megabytes. The pool is composed of 20x 1.64T HDDs in RAIDZ2, with lz4 compression enabled and no dedup. Usable should be 25.6T total ignoring the 80% guidance, and even with that it should be around 20T.

No other directory in this pool has any substantial storage usage; the largest is 1.29T.

I've updated the OP to include the additional information prompted by this reply.
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Can you provide the output of zpool status ds01, and zfs list ds01, along with du -hs /mnt/ds01?
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Code:
root@nfs01[~]# zpool status ds01
  pool: ds01
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 12:18:17 with 0 errors on Sun May 17 12:18:20 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        ds01                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/0968b081-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0a135890-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0aea2aaa-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0b9b8629-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0c50a1b6-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0d1b42c3-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0dd04374-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0e9cc31b-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/0f855643-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/105ab8d5-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/112788b3-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/11fa69ae-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/12c6c09f-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/13b31712-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/147d7ac2-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/15462e34-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/161e6699-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/170a3b24-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/17e225ec-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
            gptid/18c6d821-2283-11ea-97a1-d4bed9a88b10  ONLINE       0     0     0
        logs
          gptid/a8f21122-312b-11ea-bca3-d4bed9a88b10    ONLINE       0     0     0
        cache
          gptid/f2eba68a-2519-11ea-80bd-d4bed9a88b10    ONLINE       0     0     0
          gptid/f35f9ab7-2519-11ea-80bd-d4bed9a88b10    ONLINE       0     0     0
          gptid/f3d22ede-2519-11ea-80bd-d4bed9a88b10    ONLINE       0     0     0

errors: No known data errors


Code:
root@nfs01[~]# zfs list ds01
NAME   USED  AVAIL  REFER  MOUNTPOINT
ds01  28.5T   284G  1.61T  /mnt/ds01


Code:
root@nfs01[~]# du -hs /mnt/ds01
43T    /mnt/ds01


One extra for good measure, since the first run of du returned such a large number.
Code:
root@nfs01[/mnt/ds01]# du -hs /mnt/ds01/*
320G    /mnt/ds01/docker
1.0K    /mnt/ds01/games
487K    /mnt/ds01/games-sleeper
31T    /mnt/ds01/iocage
11T    /mnt/ds01/media
450G    /mnt/ds01/userdata
346G    /mnt/ds01/vmware-nfs


I imagine iocage is tripling up, as there are mounts to /mnt/ds01/media within two of the iocages, and two additional iocages each get a subdirectory from inside /mnt/ds01/media. I ran the following and got the results below, which make sense.

It still does not explain why ZFS is reporting a lack of disk space, unless it is incapable of understanding symlinks.
Code:
root@nfs01[~]# du -hs /mnt/ds01/iocage/jails/*
5.5G    /mnt/ds01/iocage/jails/elk
11T    /mnt/ds01/iocage/jails/nextcloud
11T    /mnt/ds01/iocage/jails/plex-plexpass_2
4.4T    /mnt/ds01/iocage/jails/radarr
5.4T    /mnt/ds01/iocage/jails/sonarr
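
To check whether those are real mounts rather than symlinks, I can also look at the mount table (just the command; output omitted):
Code:
# jail mount points added through the GUI normally show up as nullfs mounts
mount | grep nullfs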
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
One more: zpool list ds01
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Code:
root@nfs01[~]# zpool list ds01
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ds01  32.5T  30.1T  2.37T        -         -    29%    92%  1.06x  ONLINE  /mnt
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I imagine iocage is tripling up as there are links to /mnt/ds01/media within two of the iocages
What do you mean by "links"? Do you mean that you manually created symbolic links? If so, this is most likely your problem, as I doubt they work the way you think they will. If you want to give a jail access to data stored outside the jail, the way to do that is to add a mountpoint to the jail through the GUI (or, if you really want to do it through the CLI, use iocage fstab -a).
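
As a rough example (the jail name and in-jail destination below are placeholders; check iocage fstab --help for the exact argument order your version expects):
Code:
# mount a host dataset into a jail via nullfs, managed by iocage
# (ro = read-only; use rw if the jail needs to write)
iocage fstab -a myjail /mnt/ds01/media /mnt/media nullfs ro 0 0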

And though this isn't your immediate problem, there are all kinds of problems with your pool setup. Twenty devices in a RAIDZ2 vdev is about ten too many, your SLOG device will slow you down unless you're doing lots of sync writes (which you'd better not be with that pool configuration), and those three L2ARC devices will all consume system RAM to index them--so unless you have at least 64GB of RAM in the system, you'd probably see better performance removing them too.
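
If you do decide to drop them, log and cache devices can be detached from a live pool, either through the Pool Status page in the GUI or with something like the following (the gptid placeholders are whatever zpool status shows for those devices):
Code:
# remove the SLOG and L2ARC devices; substitute the gptid/... names from zpool status
zpool remove ds01 gptid/<log-device>
zpool remove ds01 gptid/<cache-device-1> gptid/<cache-device-2> gptid/<cache-device-3>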
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Code:
root@nfs01[~]# grep memory /var/run/dmesg.boot
real memory  = 206158430208 (196608 MB)
avail memory = 200309616640 (191030 MB)


Mounts into the jails were performed via the management interface; they are not symlinks.

Where do I go from here to resolve this issue?
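
For reference, each jail's mount entries can be seen in its fstab file under the usual iocage layout (output omitted):
Code:
# jails that appear to mount /mnt/ds01/media based on the du output above
cat /mnt/ds01/iocage/jails/plex-plexpass_2/fstab
cat /mnt/ds01/iocage/jails/nextcloud/fstab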
 
Last edited:

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Code:
root@nfs01[~]# zfs list
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
ds01                                                   28.5T   228G  1.61T  /mnt/ds01
ds01/.bhyve_containers                                 70.7M   228G  70.7M  /mnt/ds01/.bhyve_containers
ds01/.system                                            839M   228G   642M  legacy
ds01/.system/configs-ae1f6b9a44d94d15a639a40c1e91b328  35.6M   228G  35.6M  legacy
ds01/.system/cores                                     5.27M   228G  5.27M  legacy
ds01/.system/rrd-ae1f6b9a44d94d15a639a40c1e91b328       104M   228G   104M  legacy
ds01/.system/samba4                                     842K   228G   842K  legacy
ds01/.system/syslog-ae1f6b9a44d94d15a639a40c1e91b328   51.4M   228G  51.4M  legacy
ds01/.system/webui                                      235K   228G   235K  legacy
ds01/docker                                             320G   228G   320G  /mnt/ds01/docker
ds01/games-sleeper                                      992K   228G   992K  /mnt/ds01/games-sleeper
ds01/iocage                                             150G   228G  33.0G  /mnt/ds01/iocage
ds01/iocage/download                                    272M   228G   235K  /mnt/ds01/iocage/download
ds01/iocage/download/11.2-RELEASE                       272M   228G   272M  /mnt/ds01/iocage/download/11.2-RELEASE
ds01/iocage/images                                      235K   228G   235K  /mnt/ds01/iocage/images
ds01/iocage/jails                                       115G   228G   256K  /mnt/ds01/iocage/jails
ds01/iocage/jails/elk                                  3.74G   228G   277K  /mnt/ds01/iocage/jails/elk
ds01/iocage/jails/elk/root                             3.74G   228G  5.59G  /mnt/ds01/iocage/jails/elk/root
ds01/iocage/jails/nextcloud                            31.2G   228G   842K  /mnt/ds01/iocage/jails/nextcloud
ds01/iocage/jails/nextcloud/root                       31.1G   228G  32.6G  /mnt/ds01/iocage/jails/nextcloud/root
ds01/iocage/jails/plex-plexpass_2                      77.0G   228G   885K  /mnt/ds01/iocage/jails/plex-plexpass_2
ds01/iocage/jails/plex-plexpass_2/root                 77.0G   228G  78.4G  /mnt/ds01/iocage/jails/plex-plexpass_2/root
ds01/iocage/jails/radarr                               1.66G   228G   267K  /mnt/ds01/iocage/jails/radarr
ds01/iocage/jails/radarr/root                          1.66G   228G  3.51G  /mnt/ds01/iocage/jails/radarr/root
ds01/iocage/jails/sonarr                               1.07G   228G   288K  /mnt/ds01/iocage/jails/sonarr
ds01/iocage/jails/sonarr/root                          1.06G   228G  2.92G  /mnt/ds01/iocage/jails/sonarr/root
ds01/iocage/log                                        46.9M   228G  46.9M  /mnt/ds01/iocage/log
ds01/iocage/releases                                   1.86G   228G   235K  /mnt/ds01/iocage/releases
ds01/iocage/releases/11.2-RELEASE                      1.86G   228G   235K  /mnt/ds01/iocage/releases/11.2-RELEASE
ds01/iocage/releases/11.2-RELEASE/root                 1.86G   228G  1.86G  /mnt/ds01/iocage/releases/11.2-RELEASE/root
ds01/iocage/templates                                   235K   228G   235K  /mnt/ds01/iocage/templates
ds01/media                                             24.6T   228G  24.6T  /mnt/ds01/media
ds01/userdata                                           450G   228G   450G  /mnt/ds01/userdata
ds01/vmware-nfs                                        1.29T   228G   346G  /mnt/ds01/vmware-nfs
ds01/vmware-nfs/APP01-4entnh                            956G  1.15T  5.44G  -
ds01/vmware-nfs/APP02-99d2r                            20.0G   246G  2.10G  -
freenas-boot                                           1.75G   130G    64K  none
freenas-boot/ROOT                                      1.75G   130G    29K  none
freenas-boot/ROOT/11.3-U1                              1.75G   130G  1.00G  /
freenas-boot/ROOT/Initial-Install                         1K   130G   758M  legacy
freenas-boot/ROOT/default                               296K   130G   758M  legacy
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
OK, so you were seeing artifacts of du (not ZFS) not handling the mountpoints properly. Are you sure you don't have snapshots? Try zfs list -t snapshot | grep media.
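
As an aside, if you want du numbers that don't double-count those mounts, the -x flag keeps du on a single filesystem:
Code:
# -x: don't cross mount points, so the nullfs-mounted media inside the
# jails (and any child datasets) won't be counted again
du -hsx /mnt/ds01/iocage/jails/*/root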
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Yes, I'm sure.

Code:
root@nfs01[~]# zfs list -t snapshot | grep media
root@nfs01[~]#
 

Yuka

Dabbler
Joined
Dec 22, 2019
Messages
13
Code:
root@nfs01[~]#  zfs list -t snapshot
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
ds01/iocage/releases/11.2-RELEASE/root@plex-plexpass_2  4.44M      -  1.86G  -
ds01/iocage/releases/11.2-RELEASE/root@elk                  0      -  1.86G  -
ds01/iocage/releases/11.2-RELEASE/root@radarr               0      -  1.86G  -
ds01/iocage/releases/11.2-RELEASE/root@sonarr               0      -  1.86G  -
ds01/iocage/releases/11.2-RELEASE/root@nextcloud            0      -  1.86G  -
freenas-boot/ROOT/11.3-U1@2019-12-18-10:50:41           2.22M      -   758M  -
freenas-boot/ROOT/11.3-U1@2020-03-26-03:38:56           2.46M      -   758M  -


I started a scrub last night just for kicks, and it should be done in the next 36 minutes; I don't expect it to have any effect. I'm not sure what other information would help drive the diagnosis and resolution of this issue, but if there is any more diagnostic information I can provide, I'm willing to do so.
 