SOLVED Real trouble accessing a nested dataset

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
Hello World,

I've come to you hoping you might help me solve what I've found to be a most puzzling TrueNAS and/or BSD issue, but first I'd like to say how very grateful I am to all of you helping to further the development and promotion of the freely available FreeNAS and TrueNAS products. I've been an avid user of iX's solutions for years now, and my journey has been almost nothing but pleasant and rewarding! I've used a sizable number of FreeNAS and TrueNAS iterations, found virtually all of them useful and stable, and have never hesitated to recommend them to others.

With that sugary treat out of the way: the problem I need help with comes in the form of a nested ZFS dataset that I'm having distinct trouble accessing on the host machine, as well as sharing via NFS or SMB. A while back, I created a nested dataset meant for storing image files I would copy from other pools and directories on my network. I had hoped to create a comprehensive archive of my personal pictures, elegantly held within the ZFS dataset envelope.

Code:
root@truenas[~]# zfs get all vpool/backup3_ds/pictures_dds
NAME                           PROPERTY                VALUE                               SOURCE
vpool/backup3_ds/pictures_dds  type                    filesystem                          -
vpool/backup3_ds/pictures_dds  creation                Fri Feb 19 23:18 2021               -
vpool/backup3_ds/pictures_dds  used                    628K                                -
vpool/backup3_ds/pictures_dds  available               312G                                -
vpool/backup3_ds/pictures_dds  referenced              267K                                -
vpool/backup3_ds/pictures_dds  compressratio           1.00x                               -
vpool/backup3_ds/pictures_dds  mounted                 yes                                 -
vpool/backup3_ds/pictures_dds  quota                   none                                default
vpool/backup3_ds/pictures_dds  reservation             none                                default
vpool/backup3_ds/pictures_dds  recordsize              32K                                 local
vpool/backup3_ds/pictures_dds  mountpoint              /mnt/vpool/backup3_ds/pictures_dds  default
vpool/backup3_ds/pictures_dds  sharenfs                off                                 default
vpool/backup3_ds/pictures_dds  checksum                on                                  default
vpool/backup3_ds/pictures_dds  compression             lz4                                 inherited from vpool
vpool/backup3_ds/pictures_dds  atime                   on                                  default
vpool/backup3_ds/pictures_dds  devices                 on                                  default
vpool/backup3_ds/pictures_dds  exec                    on                                  default
vpool/backup3_ds/pictures_dds  setuid                  on                                  default
vpool/backup3_ds/pictures_dds  readonly                off                                 default
vpool/backup3_ds/pictures_dds  jailed                  off                                 default
vpool/backup3_ds/pictures_dds  snapdir                 hidden                              default
vpool/backup3_ds/pictures_dds  aclmode                 passthrough                         local
vpool/backup3_ds/pictures_dds  aclinherit              passthrough                         inherited from vpool
vpool/backup3_ds/pictures_dds  createtxg               137090                              -
vpool/backup3_ds/pictures_dds  canmount                on                                  default
vpool/backup3_ds/pictures_dds  xattr                   on                                  default
vpool/backup3_ds/pictures_dds  copies                  1                                   local
vpool/backup3_ds/pictures_dds  version                 5                                   -
vpool/backup3_ds/pictures_dds  utf8only                off                                 -
vpool/backup3_ds/pictures_dds  normalization           none                                -
vpool/backup3_ds/pictures_dds  casesensitivity         sensitive                           -
vpool/backup3_ds/pictures_dds  vscan                   off                                 default
vpool/backup3_ds/pictures_dds  nbmand                  off                                 default
vpool/backup3_ds/pictures_dds  sharesmb                off                                 default
vpool/backup3_ds/pictures_dds  refquota                none                                default
vpool/backup3_ds/pictures_dds  refreservation          none                                default
vpool/backup3_ds/pictures_dds  guid                    7761989804660854182                 -
vpool/backup3_ds/pictures_dds  primarycache            all                                 default
vpool/backup3_ds/pictures_dds  secondarycache          all                                 default
vpool/backup3_ds/pictures_dds  usedbysnapshots         360K                                -
vpool/backup3_ds/pictures_dds  usedbydataset           267K                                -
vpool/backup3_ds/pictures_dds  usedbychildren          0B                                  -
vpool/backup3_ds/pictures_dds  usedbyrefreservation    0B                                  -
vpool/backup3_ds/pictures_dds  logbias                 latency                             default
vpool/backup3_ds/pictures_dds  objsetid                7023                                -
vpool/backup3_ds/pictures_dds  dedup                   off                                 default
vpool/backup3_ds/pictures_dds  mlslabel                none                                default
vpool/backup3_ds/pictures_dds  sync                    standard                            default
vpool/backup3_ds/pictures_dds  dnodesize               legacy                              default
vpool/backup3_ds/pictures_dds  refcompressratio        1.00x                               -
vpool/backup3_ds/pictures_dds  written                 105K                                -
vpool/backup3_ds/pictures_dds  logicalused             192K                                -
vpool/backup3_ds/pictures_dds  logicalreferenced       70K                                 -
vpool/backup3_ds/pictures_dds  volmode                 default                             default
vpool/backup3_ds/pictures_dds  filesystem_limit        none                                default
vpool/backup3_ds/pictures_dds  snapshot_limit          none                                default
vpool/backup3_ds/pictures_dds  filesystem_count        none                                default
vpool/backup3_ds/pictures_dds  snapshot_count          none                                default
vpool/backup3_ds/pictures_dds  snapdev                 hidden                              default
vpool/backup3_ds/pictures_dds  acltype                 nfsv4                               default
vpool/backup3_ds/pictures_dds  context                 none                                default
vpool/backup3_ds/pictures_dds  fscontext               none                                default
vpool/backup3_ds/pictures_dds  defcontext              none                                default
vpool/backup3_ds/pictures_dds  rootcontext             none                                default
vpool/backup3_ds/pictures_dds  relatime                off                                 default
vpool/backup3_ds/pictures_dds  redundant_metadata      all                                 default
vpool/backup3_ds/pictures_dds  overlay                 on                                  default
vpool/backup3_ds/pictures_dds  encryption              aes-256-gcm                         -
vpool/backup3_ds/pictures_dds  keylocation             none                                default
vpool/backup3_ds/pictures_dds  keyformat               hex                                 -
vpool/backup3_ds/pictures_dds  pbkdf2iters             0                                   default
vpool/backup3_ds/pictures_dds  encryptionroot          vpool                               -
vpool/backup3_ds/pictures_dds  keystatus               available                           -
vpool/backup3_ds/pictures_dds  special_small_blocks    0                                   default
vpool/backup3_ds/pictures_dds  org.truenas:managedby   10.15.0.21                          local
vpool/backup3_ds/pictures_dds  org.freebsd.ioc:active  no                                  inherited from vpool
root@truenas[~]#


This nested dataset, so creatively named "pictures_dds", lies within a dataset called "backup3_ds", which is one of several such datasets on an encrypted pool named "vpool". Despite the used property showing only 628K, I know for a fact that this nested dataset, pictures_dds, contains much, much more data, and it is mounted on the host system.

Code:
root@truenas[~]# zfs mount | grep pictures_dds
vpool/backup3_ds/pictures_dds   /mnt/vpool/backup3_ds/pictures_dds
root@truenas[~]# ls -la -s -t -G -R /mnt/vpool/backup3_ds/pictures_dds
total 48
36 drwxrwxrwx+ 9 root  wheel  12 Apr 20 15:51 ..
12 drwxrwxrwx+ 2 root  wheel   2 Feb 19 23:18 .
root@truenas[~]# 
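For completeness, here's a read-only cross-check of where ZFS thinks the space sits (a sketch using the names from above; zfs list -o space breaks "used" down into snapshot, dataset and child components):

Code:
# compare ZFS's space accounting with what du sees on disk (read-only)
zfs list -r -o space vpool/backup3_ds
du -sh /mnt/vpool/backup3_ds/*
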


Trying to access the data as the root user directly on the host system yields surprisingly little, despite the permissions seemingly being set correctly:

Code:
root@truenas[~]# getfacl /mnt/vpool/backup3_ds/pictures_dds
# file: /mnt/vpool/backup3_ds/pictures_dds
# owner: root
# group: wheel
            owner@:rwxpDdaARWcCos:fd----I:allow
            group@:rwxpDdaARWcCos:fd----I:allow
         everyone@:rwxpDdaARWcCos:fd----I:allow
         everyone@:--------------:fd----I:allow
root@truenas[~]# 


I'll refrain from adding any more information and making this post any longer; I welcome guidance and ideas from anyone more in the know.

Thank you very much!

These pictures mean the world to me.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
vpool/backup3_ds/pictures_dds referenced 267K
If you're missing content in that dataset (even on the host as you seem to be showing) there's not a lot we can do to show you where that content is (only you can see your server and know where you put things).

It appears to me that only 267K of data is held in that dataset, and what you're finding matches that.

What is absolutely possible is that you have a naming conflict: a directory of the same name, in the same place as the dataset, which contains all the data.

My first advice would be to use zfs rename to have a look "behind" the dataset and re-do your ls...

zfs rename vpool/backup3_ds/pictures_dds vpool/backup3_ds/temp_pictures_dds

then

ls -la -s -t -G -R /mnt/vpool/backup3_ds/pictures_dds
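
...and once you've had a look (and moved anything you find out of the way), the reverse rename should put the dataset back under its original name:

zfs rename vpool/backup3_ds/temp_pictures_dds vpool/backup3_ds/pictures_dds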
 

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
sretalla said:
If you're missing content in that dataset (even on the host, as you seem to be showing), there's not a lot we can do to show you where that content is (only you can see your server and know where you put things).

It appears to me that only 267K of data is held in that dataset, and what you're finding matches that.

What is absolutely possible is that you have a naming conflict: a directory of the same name, in the same place as the dataset, which contains all the data.

My first advice would be to use zfs rename to have a look "behind" the dataset and re-do your ls...

zfs rename vpool/backup3_ds/pictures_dds vpool/backup3_ds/temp_pictures_dds

then

ls -la -s -t -G -R /mnt/vpool/backup3_ds/pictures_dds
Thank you for your reply. What you've outlined is a bit new to me, so I'm pursuing a few other routes before giving it a try.

Here's some more information for anyone willing to come to my aid:
I've tried to recreate the steps I took all the way back in February, when I created the dataset at the root of my problems. I've come to think I must have shared and modified said nested dataset via SMB at the time, while I might have also accessed the mother dataset via NFS.

Since I've been operating off a highly customized Alpine Linux installation that was, frankly, meant for something completely different and lacks most GNU components, I switched instead to a recent openSUSE Leap 15.3 VM I had set up for testing. Luckily, that distro comes with comprehensive CIFS as well as NFS client support. Trying to leverage it, I recreated the shares I think I used in February, but to no avail.

Code:
lias@localhost:~> sudo !!                                                                                                    
sudo mount -t nfs4 10.15.0.20:/mnt/vpool/backup3_ds /home/lias/nfs/backup3_ds                          
[sudo] password for root:                                                                                                    
lias@localhost:~>

While I had no trouble mounting the mother dataset, backup3_ds, the daughter dataset, pictures_dds, remained inaccessible. That was somewhat expected. Creating a share for the daughter dataset and mounting it directly yielded no result either:

Code:
lias@localhost:~> mkdir /home/lias/nfs/pictures_dds                                                                           
lias@localhost:~> sudo mount -t nfs4 10.15.0.20:/mnt/vpool/backu3_ds/pictures_dds /home/lias/nfs/pictures_dds/                
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.     
mount.nfs4: mounting 10.15.0.20:/mnt/vpool/backu3_ds/pictures_dds failed, reason given by server: No such file or directory   
lias@localhost:~>
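
(Rereading that paste, I notice the server's "No such file or directory" follows directly from my "backu3_ds" typo in the path. For the record, a sketch of the corrected command, which could only succeed if the daughter dataset is actually exported:)

Code:
sudo mount -t nfs4 10.15.0.20:/mnt/vpool/backup3_ds/pictures_dds /home/lias/nfs/pictures_dds/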


While creating the NFS/CIFS shares, I clearly recalled creating a "Multi-protocol (NFSv3/SMB)" CIFS share back then for accessing the daughter dataset, pictures_dds. I also clearly remember having to enter at least a password to access the share. The data under pictures_dds was owned by user "lias", which happens to be the same user I use on openSUSE. I also checked the mounts folder in the user lias' backup, just to make sure I hadn't accidentally saved the data locally, but no such luck.

What I find most puzzling about the whole matter is that I was unable to access or even list the dataset in question, pictures_dds, with any user on the TrueNAS host machine, even while the data was still accessible remotely. I've never before or since had a folder or dataset that was inaccessible on the host and yet available to network clients.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
NFS treats datasets as a boundary, so I'm not surprised if you can't follow the path into a child dataset from an NFS client.

Based on that, if you can't see the child dataset at all, perhaps you created a directory with the same name in the same location (which is what I suggested already), and it is blocking your ability to see it.
 

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
I've gone through most of the tools in my toolbox but have gotten nowhere. I'm ready to try what you suggested in your post from Monday, sretalla. The only reason I postponed doing so was that I didn't know how your suggestion might interfere with the quasi-solutions I had just attempted. Now that I no longer care about that type of consequence, I'm eager to try what you described. I'll let you know how it goes. With that said...

I'm becoming ever more concerned (and angry with myself) for allowing things to develop this far and for taking so many chances with my beloved data. Since accessing the problem daughter dataset (in my case /mnt/vpool/backup3_ds/pictures_dds) through the host machine (say, SSH or the web UI shell) returns nothing but a seemingly empty folder, and serving said daughter dataset (pictures_dds) or its hierarchical mother (backup3_ds) via NFS or CIFS returns the same, I'm growing increasingly concerned that the dataset (pictures_dds) might indeed be empty and that the countless pictures I supposedly saved to it are somewhere else entirely (or, scarier still, nowhere at all).

I've already said these pictures mean the world to me, and since I'd rather lose a kidney than these memories, I'll have to make sure there's actual data in pictures_dds before taking any further steps, should sretalla's kind suggestion fail. Wish me luck; otherwise I might be spending months and mighty bucks recovering this data.
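
Before touching anything, I plan to stick to read-only checks along these lines (a sketch using the names from this thread; the .zfs directory remains reachable by explicit path even with snapdir=hidden):

Code:
# list any snapshots under the mother dataset (usedbysnapshots shows 360K)
zfs list -t snapshot -r vpool/backup3_ds
# peek into a snapshot without mounting or changing anything
ls /mnt/vpool/backup3_ds/pictures_dds/.zfs/snapshot/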

Cheers!
 

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
sretalla said:
If you're missing content in that dataset (even on the host, as you seem to be showing), there's not a lot we can do to show you where that content is (only you can see your server and know where you put things).

It appears to me that only 267K of data is held in that dataset, and what you're finding matches that.

What is absolutely possible is that you have a naming conflict: a directory of the same name, in the same place as the dataset, which contains all the data.

My first advice would be to use zfs rename to have a look "behind" the dataset and re-do your ls...

zfs rename vpool/backup3_ds/pictures_dds vpool/backup3_ds/temp_pictures_dds

then

ls -la -s -t -G -R /mnt/vpool/backup3_ds/pictures_dds

sretalla also said:
NFS treats datasets as a boundary, so I'm not surprised if you can't follow the path into a child dataset from an NFS client.

Based on that, if you can't see the child dataset at all, perhaps you created a directory with the same name in the same location (which is what I suggested already), and it is blocking your ability to see it.

After rereading your posts a number of times, it finally dawned on me that all you were suggesting was that I rename the daughter dataset to uncover a potential naming conflict. After ruminating on your words some more, I came to see how the error you suspected might have come about. I finally tried what you suggested, and I guess you, as well as the community at large, deserve to know how it went.

Warning: There is going to be some rather strong language in the spoiler below; please don't open it if you're not comfortable with that. Also, CAPS warning.
SWEET BABY JESUS, THIS JUST MIGHT BE THE HAPPIEST I'VE EVER BEEN! YOU SIR ARE A G0DD4MN GENIUS, MIRACLE WORKER, NOBEL PEACE PRIZE WINNER, VOODOO MASTER, GIANT WHISPERER! HOLY JUMPING $H1TB4LLS, I GUESS I'M KEEPIN' THAT FUKKIN KIDNEY AFTER ALL!

It worked. And I'm forever grateful to you, sretalla.
I actually started dancing.

This might be a bit much, but by God do you deserve it.
Bless you. <3

Addendum: Should anyone want to know, the error most likely arose from a syntax blunder with "cp" on BSD or Linux: instead of copying the desired data into pictures_dds/, I accidentally created a new folder, also named pictures_dds, and copied the data to /mnt/vpool/backup3_ds/pictures_dds/pictures_dds, producing the issue.
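
For illustration, a sketch of how that blunder plays out (hypothetical source path; on FreeBSD, cp copies the contents of the source directory when its name ends in a slash, and the directory itself when it doesn't):

Code:
# intended: copy the *contents* of the source into the dataset
cp -R /some/source/pictures_dds/ /mnt/vpool/backup3_ds/pictures_dds/

# blunder: without the trailing slash, the directory itself is copied,
# creating /mnt/vpool/backup3_ds/pictures_dds/pictures_dds
# (GNU cp ignores the trailing slash; use /some/source/pictures_dds/. there)
cp -R /some/source/pictures_dds /mnt/vpool/backup3_ds/pictures_dds/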
 

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
Code:
root@truenas[~]# du -hc /mnt/vpool/backup3_ds/pictures_dds
...
584G    /mnt/vpool/backup3_ds/pictures_dds
584G    total
root@truenas[~]# 


That's more like it.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes, I had a similar issue recently, where I NFS mounted a parent dataset and copied data into what I thought was a child dataset... only to find the child dataset was empty... but the parent showed usage.

Unmounting the child dataset on the TrueNAS host allowed me to "see" the contents in a subdirectory of the parent dataset.
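
Something like this, using the names from this thread:

Code:
# unmount the child dataset so the directory hiding beneath it becomes visible
zfs unmount vpool/backup3_ds/pictures_dds
ls -la /mnt/vpool/backup3_ds/pictures_dds
# remount once you've moved the stranded data out
zfs mount vpool/backup3_ds/pictures_dds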

My conclusion is not to mount parent datasets via NFS, but instead to mount the actual child datasets, with the datasets being used not for organization but as "data sets" to enable efficient backup/replication.
 

zfs get

Cadet
Joined
Aug 2, 2021
Messages
8
Stux said:
My conclusion is not to mount parent datasets via NFS, but instead to mount the actual child datasets, with the datasets being used not for organization but as "data sets" to enable efficient backup/replication.

I agree wholeheartedly. The road to hell is paved with the best intentions, and, similarly, my problems slowly arose from a desire to make better use of ZFS's features while adapting to a quickly changing data profile (a.k.a. bad planning).

I'm not sure what the convention is here, but it feels appropriate to mark the issue "SOLVED" in the thread title.

While I've got y'all here, though, I do have another, unrelated question, but first a little intro: I've recently taken it upon myself to sift through roughly 25 HDDs I had lying around, holding random data. Despite knowing better, the backup practices I've kept have been horrible. There are different drives (SAS, SATA), different filesystems, and a whole lot of data duplication and overlap. To try to remedy the situation somewhat, I've resorted to treating my data with "rmlint" and writing a custom data-sorting script. What I'd like to know now is the following:

  • What is the best RAIDZ configuration for speedy & safe (scripted) data organizing? Should I add an Optane metadata device?
  • "rmlint" is not too fond of FreeBSD; I've tried compiling it on my own only to falter and return to Linux. Is it OK to adapt my script to Linux or would it be better to run such operations on the host machine, say in a jail? Finessing the data through network mounts doesn't seem like the most elegant solution.
  • I can't be the first person trying to sort and organize data on True/FreeNAS; are there any freely available scripts or applications I'm missing? (What I have in mind would be, for example, a plugin capable of reading even the EXIF data of pictures and sorting them accordingly.)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Should I add an Optane metadata device?
Optane is well suited to SLOG, but not so much for a special/metadata VDEV.

You will need to match your pool's redundancy level with the metadata VDEV, as it becomes integral to the pool and can cause pool loss if it fails. It also can't be removed once added (unless the pool is made of mirrors, not RAIDZ).

Good SSDs would be a candidate for the job here rather than optane.
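
For example, adding a mirrored special VDEV would look something like this (a sketch with hypothetical device names; use a three-way mirror or better to match RAIDZ2-level redundancy):

Code:
# metadata lands on the mirrored SSDs from this point on
zpool add vpool special mirror da4 da5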

would it be better to run such operations on the host machine, say in a jail?
Probably "best", but there are many roads to take, all of which may get you to the result.

I can't be the first person trying to sort and organize data on True/FreeNAS, are there any freely available scripts or applications I'm missing?
There are many...

There was some work done on PhotoPrism (although I'm not sure if it's currently compiling properly due to a dependency issue). https://www.truenas.com/community/threads/how-to-install-photoprism-in-jail.88862/

There was a recently developed unofficial plugin: https://www.truenas.com/community/threads/piwigo-plugin-for-truenas-ready-to-test.91146/


@danb35 has done work on/found a couple of others
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
As @sretalla says
Optane is a perfect SLOG. It's expensive for its size, but you only need to hold 5 or so seconds of maximum writes (I think I used about 30GB, which is more than enough by a long way). It doesn't lose data when the power goes out, and it has immense write endurance. In steady state the data written there is never read back; it's there just in case. Great for NFS & iSCSI.
For metadata/special you need much larger solid-state drives; Optanes would be very expensive for this. Some of the Intel DC S3610 (high write endurance) drives would work. These will need to be hundreds of GB up to TBs, depending on your metadata load and how much you want to push into these drives.

I use an Optane 900P as SLOG and L2ARC (manually partitioned) and mirrored DC S3610s (800GB) as a special vdev on a pool of HDDs. I got all of them second-hand from eBay, virtually unused. Probably overkill for my use case, but I like to play.
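
As a back-of-envelope check of that sizing rule (assuming a 10GbE link and the ~5 seconds mentioned above):

Code:
# 10 Gbit/s ÷ 8 = 1.25 GB/s of maximum ingest
# 1.25 GB/s × 5 s ≈ 6.25 GB of sync writes in flight
# so a ~30 GB SLOG partition leaves generous headroom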
 