Dashboard reports wrong used disk space

mcharest

Cadet
Joined
Jan 10, 2022
Messages
2
Hello,

The dashboard reports 93% used space, but that does not make sense to me:

[Screenshot: dashboard showing 93% used]


zpoule is 8 x 800G behind an LSI 3108 controller which we have set up as hardware RAID5 (no HBA, no IT mode available), so it's seen as one big 6.11 TB disk, yeah I know... Where are the 93% and the 399G of available space coming from?
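For anyone trying to line these numbers up, the pool-level and dataset-level views can be compared with standard ZFS commands, assuming only that the pool is named zpoule as above:

# Pool-level accounting: raw space on the vdev, as zpool sees it
zpool list -o name,size,allocated,free,capacity,health zpoule

# Dataset-level accounting: USED split into snapshots, the dataset
# itself, refreservations, and children, for each dataset
zfs list -o space -r zpoule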

[Screenshot]



Yet:
[Screenshot]




Output of zfs list
[Screenshot: zfs list output]


[Screenshot]


No snapshots:
[Screenshot: snapshot list (empty)]

root@poule[~]# zpool get all
NAME PROPERTY VALUE SOURCE
... (boot-pool properties removed)
zpoule size 6.09T -
zpoule capacity 15% -
zpoule altroot /mnt local
zpoule health ONLINE -
zpoule guid 17453388011588474441 -
zpoule version - default
zpoule bootfs - default
zpoule delegation on default
zpoule autoreplace off default
zpoule cachefile /data/zfs/zpool.cache local
zpoule failmode continue local
zpoule listsnapshots off default
zpoule autoexpand on local
zpoule dedupratio 1.00x -
zpoule free 5.13T -
zpoule allocated 988G -
zpoule readonly off -
zpoule ashift 12 local
zpoule comment - default
zpoule expandsize - -
zpoule freeing 0 -
zpoule fragmentation 1% -
zpoule leaked 0 -
zpoule multihost off default
zpoule checkpoint - -
zpoule load_guid 6712221680262568611 -
zpoule autotrim off default
zpoule feature@async_destroy enabled local
zpoule feature@empty_bpobj active local
zpoule feature@lz4_compress active local
zpoule feature@multi_vdev_crash_dump enabled local
zpoule feature@spacemap_histogram active local
zpoule feature@enabled_txg active local
zpoule feature@hole_birth active local
zpoule feature@extensible_dataset active local
zpoule feature@embedded_data active local
zpoule feature@bookmarks enabled local
zpoule feature@filesystem_limits enabled local
zpoule feature@large_blocks enabled local
zpoule feature@large_dnode enabled local
zpoule feature@sha512 enabled local
zpoule feature@skein enabled local
zpoule feature@userobj_accounting active local
zpoule feature@encryption enabled local
zpoule feature@project_quota active local
zpoule feature@device_removal enabled local
zpoule feature@obsolete_counts enabled local
zpoule feature@zpool_checkpoint enabled local
zpoule feature@spacemap_v2 active local
zpoule feature@allocation_classes enabled local
zpoule feature@resilver_defer enabled local
zpoule feature@bookmark_v2 enabled local
zpoule feature@redaction_bookmarks enabled local

root@poule[~]# zfs get all zpoule
NAME PROPERTY VALUE SOURCE
zpoule type filesystem -
zpoule creation Sat Jan 8 16:35 2022 -
zpoule used 4.61T -
zpoule available 1.29T -
zpoule referenced 23.2G -
zpoule compressratio 1.60x -
zpoule mounted yes -
zpoule quota none default
zpoule reservation none default
zpoule recordsize 128K default
zpoule mountpoint /mnt/zpoule default
zpoule sharenfs off default
zpoule checksum on default
zpoule compression lz4 local
zpoule atime off local
zpoule devices on default
zpoule exec on default
zpoule setuid on default
zpoule readonly off default
zpoule jailed off default
zpoule snapdir hidden default
zpoule aclmode passthrough local
zpoule aclinherit passthrough local
zpoule createtxg 1 -
zpoule canmount on default
zpoule xattr on default
zpoule copies 1 local
zpoule version 5 -
zpoule utf8only off -
zpoule normalization none -
zpoule casesensitivity sensitive -
zpoule vscan off default
zpoule nbmand off default
zpoule sharesmb off default
zpoule refquota none default
zpoule refreservation none default
zpoule guid 9753006541564849519 -
zpoule primarycache all default
zpoule secondarycache all default
zpoule usedbysnapshots 0B -
zpoule usedbydataset 23.2G -
zpoule usedbychildren 4.59T -
zpoule usedbyrefreservation 0B -
zpoule logbias latency default
zpoule objsetid 54 -
zpoule dedup off default
zpoule mlslabel none default
zpoule sync disabled local
zpoule dnodesize legacy default
zpoule refcompressratio 1.29x -
zpoule written 23.2G -
zpoule logicalused 1.55T -
zpoule logicalreferenced 29.8G -
zpoule volmode default default
zpoule filesystem_limit none default
zpoule snapshot_limit none default
zpoule filesystem_count none default
zpoule snapshot_count none default
zpoule snapdev hidden default
zpoule acltype nfsv4 default
zpoule context none default
zpoule fscontext none default
zpoule defcontext none default
zpoule rootcontext none default
zpoule relatime off default
zpoule redundant_metadata all default

Regards,

Mario
 


sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
8 x 800G behind an LSI 3108 controller which we have set up as hardware RAID5 (no HBA, no IT mode available), so it's seen as one big 6.11 TB disk, yeah I know...
Seems you already understand but don't care that your data isn't safe.


That incorrect space reporting may be the result of some data/metadata corruption, which you will certainly not be able to correct, but might be able to see with zpool status -v zpoule (Z chicken... hahaha).
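A minimal check sequence along those lines, assuming the pool really is named zpoule:

zpool status -v zpoule    # lists any files with known (permanent) errors
zpool scrub zpoule        # re-reads all data and verifies checksums
# ...wait for the scrub to complete, then check again:
zpool status -v zpoule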
 

mcharest

Cadet
Joined
Jan 10, 2022
Messages
2
Seems you already understand but don't care that your data isn't safe.
It's not that I don't care, but I can only care to the degree I'm allowed to given the hardware at hand ;-) But I begged and pleaded, so new hardware is on the way. This is a temporary/experimental setup. The real one will have a more serious pool name ;-)

Yeah, I came across that just after I initially installed TrueNAS and it would not see the controller nor any of the disks...
That incorrect space reporting may be the result of some data/metadata corruption, which you will certainly not be able to correct, but might be able to see with zpool status -v zpoule (Z chicken... hahaha).

[Screenshot: zpool status -v zpoule output]


That being said, I think I missed some critical information that I should have included in my original post. About 30 minutes before I made the post, I created a snapshot. Immediately after I did that, the used space shown on the dashboard went from 78% to 93%, even though the reported snapshot size was negligible. I deleted the snapshot, but it stayed at 93%. I spent maybe 15 minutes googling around for an explanation, which led nowhere. I captured the data for this post, created the post, and even after all that time it was still showing 93% used.

This morning I checked and it's back to 78% !?! I recreated a snapshot and poof, it went to 95% again, yet the snapshot size is 10 bytes... I deleted the snapshot and this time it went back to 78%! I'm confused. It's as though the snapshot (it's a zvol) is consuming 1 TB of data as reported by the dashboard.
[Screenshot]
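If it's the zvol's refreservation rather than the snapshot contents, comparing the space breakdown right before and right after creating a snapshot should show which bucket that ~1 TB falls into. These are standard ZFS property queries, assuming only the pool name zpoule from above:

# per-dataset split of USED: snapshots / dataset itself / refreservation / children
zfs list -o space -r zpoule

# reservation settings on the zvols themselves
zfs get -r -t volume volsize,refreservation,usedbyrefreservation zpoule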
 

truefriend-cz

Explorer
Joined
Mar 4, 2022
Messages
54
I have this problem... I have no idea what the used and free space mean... In the server I have 2 x 6 TB HDDs in RAID1.

I have roughly 5 TB used on the drive.

[Screenshot]
 

RaynorLi

Cadet
Joined
Sep 16, 2022
Messages
2
I have the same issue too. I have assigned 7.8T to the pool, but can only use 151G; I don't know where the issue is.
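If it helps, a first step for a case like this is to compare the pool-level numbers with the per-dataset breakdown and any quotas or reservations; "tank" below is only a placeholder for the real pool name:

# replace "tank" with the actual pool name
zpool list tank
zfs list -o space -r tank
zfs get -r quota,refquota,reservation,refreservation tank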
 