I'm being affected quite badly by this known, tracked FreeBSD ZFS issue, which is related to statfs (see also this issue + this), when using my FreeNAS box. Unfortunately I'm not clear what must be avoided to keep from triggering the issue, or how to identify existing snapshots or nested datasets that must be renamed as a workaround.
This is the issue where fts_read returns "no such file or directory" when one tries to ls -1R or find in a directory that is (or contains somewhere in its hierarchy) a .zfs/snapshot directory. The second link says it's due to the 63-or-88 byte statfs struct limit mentioned in the user guide (1.2), which affects the mounting of snapshots, but there isn't much detail, and it isn't clear in this context exactly which "filesystem paths" are relevant when a pool contains snapshots of a nested dataset (or indeed snapshots of a root dataset that contains nested datasets), so I can't fix it on my NAS.

The symptoms show in two ways.
- find and ls error out when searching files+dirs on my NAS via the console. I'm not clear about the exact criteria triggering the issue, so I can't tell which snapshots to delete or rename to fix it.
- I can't be sure whether snapshots in a nested dataset hierarchy make the issue worse. What I mean is: suppose I create a pool comprising a top-level dataset containing 3 or 4 levels of nested datasets, like this: production_aberdeen/public_files/current/marketing, where marketing contains only ordinary files+dirs. (Because we might need different ZFS settings on the marketing dataset compared to its parents.) Snapshots of production_aberdeen will imply snapshots of marketing, or marketing might have snapshots enabled in its own right. Judging by the output of zfs list, the existence of nested datasets seems to imply nested .zfs/snapshot directories. If I have nested datasets and then use find or ls -1R, I'm guessing that at some point they will need to mount a .zfs/snapshot, and might hit a path longer than the 63-or-88 byte limit and error out? I can't be sure.
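To make the nesting concrete, here's a small sketch (POSIX sh) that prints the hidden snapshot directory each dataset level would expose once snapshotted. The dataset names are the hypothetical ones from my example above, and the /mnt prefix is an assumption (the FreeNAS default), not something specific to your pool:

```shell
# Sketch: for a nested dataset path and a snapshot name, print the hidden
# .zfs/snapshot directory each dataset level would expose once snapshotted.
# The /mnt prefix and the dataset names are assumptions for illustration.
list_snap_dirs() {
    acc=""
    old_ifs=$IFS; IFS=/
    for part in $1; do
        acc="$acc/$part"
        printf '%s/.zfs/snapshot/%s\n' "/mnt$acc" "$2"
    done
    IFS=$old_ifs
}

list_snap_dirs tank/production_aberdeen/public_files/current/marketing monthly
# one line per dataset level, each path a little longer than the last
```

Each nesting level adds its own .zfs/snapshot directory, and the deepest one carries the longest mountpoint path, which is presumably where a fixed-size statfs buffer would bite first.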
At the moment it's difficult/impossible to use find or ls properly to search my pool from the console. So my questions are:

- Does FreeBSD 11.1 in the upcoming FreeNAS 11.1 fix this, as the above bug report suggests, or is it only a partial fix? (Or does resolution require full ino64 support, including 1024-byte statfs paths, so that it will only be resolved when FreeBSD 12 lands in FreeNAS during/after 2019?)
- What exactly are the criteria, for nested dataset names/paths and their snapshot names, that trigger this issue? I need to know what to avoid on my NAS if I want find/ls to always work, or what to use to find datasets + manual snapshots that need renaming because they trigger it.
Note - I've tried to narrow down the specific snapshot(s) that can't be traversed, using both truss find ... and a nested find . -maxdepth 1 -exec find {} ... \; but I can't see a specific problem snapshot from either of them. Perhaps I'm misinterpreting the truss output, it's possible, but it seems to say that the path lengths were okay..okay..okay..then it fails, yet the last path shown was the same length as previous path lengths that didn't fail.
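A more mechanical version of that nested-find bisection, in case it helps anyone reproduce what I'm seeing: test find one subtree at a time and report only the subtrees where it fails, recursing into them to narrow it down. The starting point /mnt/tank is an assumption; point it at whatever directory errors out:

```shell
# Sketch: bisect the tree by running find(1) per subtree and reporting
# only the subtrees where it fails, then recursing into those.
# The starting point /mnt/tank is an assumption for illustration.
isolate() {
    for d in "$1"/*/; do
        [ -d "$d" ] || continue
        if ! find "$d" >/dev/null 2>&1; then
            printf 'find fails under: %s\n' "$d"
            # narrow it down one level further
            isolate "$d"
        fi
    done
}

isolate /mnt/tank
```

In theory the deepest directory this prints should contain (or be) the problem .zfs/snapshot entry, though in my case truss has not pointed at an obviously over-long path.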