Pool full, but dataset is not

jett1

Cadet
Joined
Nov 15, 2023
Messages
3
Hello,

I cannot transfer any more files to my SMB share. I ran a test from my MacBook and from Windows 11; both say there is not enough room to transfer a 40 GB directory into a directory in the SMB share, through Finder and File Explorer respectively. The share is located at /mnt/data/samba/share and has been working fine for months.

The picture below shows the TrueNAS Pools screen at 100% used. But the dataset with the SMB share is only at 16.43 TiB of the available 25.32 TiB. What happened to the remaining 8.88 TiB? I have deleted all previous snapshots of data/samba, but that only freed up around 30 GB IIRC.

[Screenshot from 2023-11-15: TrueNAS Pools screen showing the data pool at 100% used]


Here is some zfs and zpool output, but after searching this forum and the manpages I still don't know why this is happening. Any ideas or pointers?

Code:
zfs get used,logicalused,written,usedbychildren,usedbydataset data
NAME  PROPERTY        VALUE      SOURCE
data  used            25.3T      -
data  logicalused     25.8T      -
data  written         8.88T      -
data  usedbychildren  16.4T      -
data  usedbydataset   8.88T      -


Code:
zfs get used,logicalused,written,usedbychildren,usedbydataset data/samba
NAME        PROPERTY        VALUE      SOURCE
data/samba  used            16.4T      -
data/samba  logicalused     16.8T      -
data/samba  written         16.4T      -
data/samba  usedbychildren  0B         -
data/samba  usedbydataset   16.4T      -


Code:
zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   206G  3.03G   203G        -         -     0%     1%  1.00x    ONLINE  -
data       25.5T  25.3T   144G        -         -     9%    99%  1.00x    ONLINE  /mnt
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Here is some zfs and zpool output
That's a terrible format; you should use zfs list -o space.

In any case...
What happened to the remaining 8.88TiB?
The answer is fairly clear in the GUI and explicitly stated by the usedbydataset property: It's being used in the root dataset, data. What is actually in there is another question entirely...
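The numbers line up: used (25.3T) = usedbydataset (8.88T) + usedbychildren (16.4T), so those 8.88 TiB are files sitting directly in /mnt/data rather than in any child dataset. A quick way to see what's there (a sketch; on FreeBSD-based TrueNAS, -x stops du from crossing into the child datasets mounted underneath):

Code:
# Per-directory totals, one level deep, without descending
# into child dataset mountpoints (-x = same filesystem only)
du -x -d 1 -h /mnt/data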
 

jett1

Cadet
Joined
Nov 15, 2023
Messages
3
Sorry about that, here's that new output.
Code:
> zfs list -o space
NAME                                                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
boot-pool                                               197G  3.03G        0B     96K             0B      3.03G
boot-pool/ROOT                                          197G  3.02G        0B     96K             0B      3.02G
boot-pool/ROOT/13.0-U5.3                                197G  3.02G     1.51G   1.51G             0B         0B
boot-pool/ROOT/2023-06-27-boot                          197G     8K        0B      8K             0B         0B
boot-pool/ROOT/Initial-Install                          197G     8K        0B      8K             0B         0B
boot-pool/ROOT/default                                  197G   324K        0B    324K             0B         0B
data                                                   15.5G  25.3T        0B   8.88T             0B      16.4T
data/.system                                           15.5G  52.3M        0B   2.65M             0B      49.6M
data/.system/configs-bd26c8fd36fd4618a75dffa52c04f828  15.5G  11.8M        0B   11.8M             0B         0B
data/.system/cores                                     1024M    96K        0B     96K             0B         0B
data/.system/rrd-bd26c8fd36fd4618a75dffa52c04f828      15.5G  33.7M        0B   33.7M             0B         0B
data/.system/samba4                                    15.5G   764K      172K    592K             0B         0B
data/.system/services                                  15.5G    96K        0B     96K             0B         0B
data/.system/syslog-bd26c8fd36fd4618a75dffa52c04f828   15.5G  3.12M        0B   3.12M             0B         0B
data/.system/webui                                     15.5G    96K        0B     96K             0B         0B
data/iocage                                            15.5G  2.84G        0B   8.24M             0B      2.84G
data/iocage/download                                   15.5G   691M        0B     96K             0B       691M
data/iocage/download/13.1-RELEASE                      15.5G   435M        0B    435M             0B         0B
data/iocage/download/13.2-RELEASE                      15.5G   256M        0B    256M             0B         0B
data/iocage/images                                     15.5G    96K        0B     96K             0B         0B
data/iocage/jails                                      15.5G    96K        0B     96K             0B         0B
data/iocage/log                                        15.5G    96K        0B     96K             0B         0B
data/iocage/releases                                   15.5G  2.16G        0B     96K             0B      2.16G
data/iocage/releases/13.1-RELEASE                      15.5G  1.51G        0B     96K             0B      1.51G
data/iocage/releases/13.1-RELEASE/root                 15.5G  1.51G        0B   1.51G             0B         0B
data/iocage/releases/13.2-RELEASE                      15.5G   668M        0B     96K             0B       668M
data/iocage/releases/13.2-RELEASE/root                 15.5G   668M        0B    668M             0B         0B
data/iocage/templates                                  15.5G    96K        0B     96K             0B         0B
data/samba                                             15.5G  16.4T        0B   16.4T             0B         0B
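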


Here's my system information as well:
Version: TrueNAS-13.0-U5.3
Dell PowerEdge T340
CPU: Intel(R) Xeon(R) E-2234 CPU @ 3.60GHz (8 threads)
Memory: 31.8 GiB
Storage: 8x Samsung 4TB 870 QVO SSDs in RAID 5
Boot: 2x 250GB NVMe in RAID 1
RAID card: DELL PERC H730P Adapter
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Woah, are you using hardware RAID? That's a major no-no and a good way to kill performance and risk data loss.
 

jett1

Cadet
Joined
Nov 15, 2023
Messages
3
Yes. I was brand new to TrueNAS when I inherited the system, and I only recently found out hardware RAID isn't how it's supposed to work. I'm planning to migrate to a new system soon and will handle this then.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Did you rule out snapshots?

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Did you rule out snapshots?
Yes, usedsnap is zero. Whatever the data is, it's in the root dataset and it's still exposed to the OS.
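If nothing jumps out in a directory listing, hunting for large files directly can help (a sketch; the 1G threshold is just an example, and -x again keeps find on the root dataset's own filesystem):

Code:
# Large files that live directly in the root dataset,
# not in any child dataset mounted underneath it
find /mnt/data -x -type f -size +1G -exec ls -lh {} +

One classic cause worth checking: data written while a child dataset was unmounted lands in the root dataset, and can even end up hidden underneath the mountpoint once the child mounts again.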
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I would first resolve the hardware RAID issue before fiddling with the pool... do we know its ZFS layout?
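(For the record, the layout can be checked as below, though with a hardware RAID controller in the way it will likely report a single device rather than a real vdev topology.)

Code:
zpool status data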

@jett1 please read the following resources:
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I wouldn't. Removing the hardware RAID will probably (almost certainly) make the pool inaccessible, depending on how the hardware RAID is configured.
1. Find out what's at /mnt/data and deal with it as appropriate
2. Back up the pool to somewhere else (see the sketch below)
3. Deal with the RAID issues
4. Restore the pool
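For steps 2 and 4, ZFS replication is the usual tool. A minimal sketch, assuming a hypothetical destination pool named backup with enough room:

Code:
# Step 2: snapshot everything recursively, then replicate the pool
zfs snapshot -r data@migrate
zfs send -R data@migrate | zfs recv -F backup/data

# Step 4, after the pool is rebuilt without hardware RAID:
zfs send -R backup/data@migrate | zfs recv -F data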
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I wouldn't. Removing the hardware RAID will probably (almost certainly) make the pool inaccessible, depending on how the hardware RAID is configured.
1. Find out what's at /mnt/data and deal with it as appropriate
2. Back up the pool to somewhere else
3. Deal with the RAID issues
4. Restore the pool
I was thinking more of doing the backup first, taking out the hardware RAID, then resolving the space issue.

Moving the data to a temporary ZFS pool bigger than their current one would let them fix the space issue directly, without suffering crawling performance while deleting files. They would then be able to shift the now-smaller data back to the original (no-longer-hardware-RAID) pool.
 