
ZFS Feature Flags in TrueNAS

Yorick (Wizard, joined Nov 4, 2018, 1,912 messages)
I'm going through it and will edit as I go.

man zfs-redact may help. It's certainly a feature that will take some reading to fully grasp.

This feature can be used to allow clones of a filesystem to be made
available on a remote system, in the case where their parent need not
(or needs to not) be usable. For example, if a filesystem contains
sensitive data, and it has clones where that sensitive data has been
secured or replaced with dummy data, redacted sends can be used to
replicate the secured data without replicating the original sensitive
data, while still sharing all possible blocks.
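To make the man-page description concrete, the workflow looks roughly like this. This is a sketch only; the pool, dataset, clone, snapshot, and bookmark names (tank, sensitive, cleaned, snap1, book1) are all made up for illustration:

```shell
# Snapshot both the parent and the sanitized clone.
zfs snapshot tank/sensitive@snap1
zfs snapshot tank/cleaned@snap1

# Create a redaction bookmark recording the blocks that differ,
# i.e. the sensitive data that must never leave the system.
zfs redact tank/sensitive@snap1 book1 tank/cleaned@snap1

# Send the parent with those blocks withheld...
zfs send --redact book1 tank/sensitive@snap1 | ssh remote zfs receive pool/redacted

# ...then send the clone incrementally against it, sharing all
# blocks that were not redacted.
zfs send -i tank/sensitive@snap1 tank/cleaned@snap1 | ssh remote zfs receive pool/cleaned
```

The receiving side ends up with a parent that has holes where the sensitive data was, plus fully usable clones.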

Encryption: That's what the man page says; worth testing. I can do that in roughly a week or so, once I have a test system ready.

large_dnode: Definitely warn.

project_quota: Really good question. You'd expect software "above" that handles the project IDs.

allocation classes: Not sure I get the question about "generic metadata". As opposed to?

General read-only compatible notes in the list: Maybe add a note that read-only import still works while such a feature is active, whereas for features that make the pool unreadable, the feature has to drop back to merely enabled, rather than active, before that restriction goes away.
 

Ericloewe (Server Wrangler, Moderator, joined Feb 15, 2014, 20,176 messages)
Okay, so the idea is to use clones plus a parent, so that you have viable datasets (the clones) despite the redaction.

That makes sense as a minimum-effort (in a good way!) way of supporting this. No new interfaces are needed beyond one for "don't send these bits", with existing ZFS functionality providing a framework for it to be useful.
 

Yorick
New in beta2:

device_rebuild
 

Yorick
New in beta2.1:

zstd_compress
 

Ericloewe
At least zstd is fairly easy to explain, as far as this topic is concerned.

The only real gotcha, versus other compression options, is that the bootloader doesn't support it yet.
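For reference, enabling it is a one-liner; this is a sketch with made-up pool/dataset names, and the zstd_compress feature only becomes active once compressed data is actually written:

```shell
# Enable zstd at the default level on a dataset.
zfs set compression=zstd tank/data

# Explicit levels are also accepted, e.g. zstd-1 through zstd-19:
zfs set compression=zstd-3 tank/data

# Verify what is in effect:
zfs get compression tank/data
```

Given the bootloader caveat, the one place not to set this is any dataset the bootloader has to read.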
 

Ericloewe

Yorick
if a dedup or special dataset is created

That should be vdev, I think.
 

Ericloewe
Fixed, thanks!
 

Yorick
This process allows for faster recoveries, if the old disk is on its way out.

Yes. I think it also allows for faster recoveries if the new disk is limited in random write performance, such as SMR disks.

It may also be worthwhile to mention that this feature is only supported for mirrors, not for raidz. If memory serves.
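If it helps, the feature is driven by a flag on the replace/attach commands. Sketch only, with made-up pool and disk names; the -s flag is what activates device_rebuild:

```shell
# Request a sequential rebuild instead of a healing resilver
# when replacing a mirror member.
zpool replace -s tank da1 da2

# The same flag exists when attaching a new mirror member:
zpool attach -s tank da1 da2

# A scrub afterwards is recommended, since a sequential rebuild
# copies data without verifying checksums along the way.
zpool scrub tank
```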
 

Ericloewe
Yes. I think it also allows for faster recoveries if the new disk is limited in random write performance, such as SMR disks.

It may also be worthwhile to mention that this feature is only supported for mirrors, not for raidz. If memory serves.
It may be possible with an in-place replacement. I'll have to look into it to figure out whether that is the case.

As for random write performance, that is a good point.
 

TooMuchData (Contributor, joined Jan 4, 2015, 188 messages)
"the large_dnode feature flag breaks zfs send to systems that do not support this flag."
This is system dependent, not zpool dependent?
Better questions: If I upgrade both source and target zpools will replication tasks continue as usual? If replication gets broken (one pool is upgraded but other not) will it be fixed by upgrading the other zpool?
Thanks for your tireless help to this community.
 

Ericloewe
"the large_dnode feature flag breaks zfs send to systems that do not support this flag."
This is system dependent, not zpool dependent?
Excellent question. I have no idea and I'll have to get back to you on that one.
Better questions: If I upgrade both source and target zpools will replication tasks continue as usual? If replication gets broken (one pool is upgraded but other not) will it be fixed by upgrading the other zpool?
I have no reason to think otherwise.
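One way to check where each side stands before worrying about replication; a sketch with a made-up pool name:

```shell
# Feature state on the pool: disabled, enabled, or active.
zpool get feature@large_dnode tank

# Per-dataset, the dnodesize property controls whether large
# dnodes are actually used; the default "legacy" avoids
# activating the feature in the first place.
zfs get dnodesize tank/data
```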
 
(Joined Jan 27, 2020, 577 messages)
Thank you for this great overview! Since TrueNAS 12 is now mission-critical, will this resource be updated in the future?
 

Yorick
Dumping the values of the compatibility= property from OpenZFS >= 2.1 here: it can be set during zpool create, is obeyed by zpool upgrade, and is acted on by zpool status. See the PR this was introduced with.

Possible values; multiple can be comma-separated, except legacy:

Code:
legacy (no features, no zpool upgrade)
compat-2018 OR 2018
compat-2019 OR 2019
compat-2020 OR 2020
compat-2021 OR 2021
freebsd-11.0 OR freebsd-11.1
freebsd-11.0 OR freenas-11.0
freebsd-11.2 OR freenas-11.2
freebsd-11.3 OR freebsd-11.4
freebsd-11.3 OR freebsd-12.0
freebsd-11.3 OR freebsd-12.1
freebsd-11.3 OR freebsd-12.2
freebsd-11.3 OR freenas-11.3
freenas-9.10.2
freenas-11.0 OR freenas-11.1
grub2
openzfs-2.0-freebsd OR truenas-12.0
openzfs-2.1-freebsd
openzfs-2.0-linux
openzfs-2.1-linux
openzfsonosx-1.7.0
openzfsonosx-1.8.1
openzfsonosx-1.9.3 OR openzfsonosx-1.9.4
zol-0.6.1
zol-0.6.4
zol-0.6.5
zol-0.7 OR ubuntu-18.04
zol-0.8 OR ubuntu-20.04
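In practice it's used like this; a sketch with made-up pool and disk names:

```shell
# Restrict a new pool to features understood by OpenZFS 2.0 on Linux:
zpool create -o compatibility=openzfs-2.0-linux tank mirror sda sdb

# Set or change it on an existing pool; values can be combined:
zpool set compatibility=openzfs-2.0-linux,openzfs-2.0-freebsd tank

# zpool upgrade will then only enable features permitted by the
# setting, and zpool status reports any conflicts:
zpool upgrade tank
zpool status tank
```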
 

Ericloewe
Having had a minor contribution to that, you'd think I would have remembered that I might want to update this resource. I'll throw it on the pile.

What version is TrueNAS 12 approximately on? Some variant of 2.0, right?
 

Yorick
Yeah, TrueNAS Core/SCALE is on 2.0.
 

Yorick
TrueNAS Core 13.0 adds one feature: draid. It is not read-only compatible and becomes active when a draid vdev is created. This is ZFS 2.1.4-1.
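For the record, a dRAID vdev is specified as draid[<parity>][:<data>d][:<children>c][:<spares>s]. A sketch with made-up names: double parity, 8 data disks per group, 24 children, 2 distributed spares:

```shell
zpool create tank draid2:8d:24c:2s sda sdb sdc sdd sde sdf sdg sdh \
    sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw sdx

# Creating the vdev makes the draid feature active, so the pool can
# no longer be imported by systems without draid support:
zpool status tank
```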
 