
ZFS Feature Flags in FreeNAS

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,781
I'm going through it and will edit as I go.

man zfs-redact may help. It certainly is a feature that will need some reading to fully get.

This feature can be used to allow clones of a filesystem to be made
available on a remote system, in the case where their parent need not
(or needs to not) be usable. For example, if a filesystem contains
sensitive data, and it has clones where that sensitive data has been
secured or replaced with dummy data, redacted sends can be used to
replicate the secured data without replicating the original sensitive
data, while still sharing all possible blocks.
Encryption: That's what the man page says, worth testing. I can do that in roughly a week or so, I'll have a test system ready then.

large_dnode: Definitely warn.

project_quota: Really good question. You'd expect software "above" that handles the project IDs.

allocation classes: Not sure I get the question about "generic metadata". As opposed to?

General read-only compatible notes in the list: Maybe add a note that a read-only compatible feature still permits read-only import while active, whereas a feature that makes the pool unreadable only does so while active — once it drops back to merely enabled, that restriction goes away.
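For reference, the disabled/enabled/active state of every feature flag can be listed with zpool get — a quick sketch, with a hypothetical pool name "tank":

```shell
# List the state of all feature flags on a pool named "tank" (hypothetical).
# Each feature is reported as disabled, enabled, or active.
zpool get all tank | grep 'feature@'

# Query a single flag, e.g. the read-only compatible spacemap_histogram:
zpool get feature@spacemap_histogram tank
```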
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,949
Okay, so the idea is to use clones plus a parent, so that you have viable datasets (the clones) despite the redaction.

That makes sense as a minimum-effort (in a good way!) approach to supporting this. No new interfaces are needed beyond one for "don't send these bits", with existing ZFS functionality providing the framework that makes it useful.
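A rough sketch of that clone-plus-parent workflow, per man zfs-redact and zfs-send — all pool, dataset, and bookmark names here are hypothetical:

```shell
# Snapshot the parent that contains sensitive data (names are hypothetical).
zfs snapshot tank/secret@snap1

# Clone it and scrub/replace the sensitive bits in the clone.
zfs clone tank/secret@snap1 tank/public
# ... replace the sensitive files under tank/public ...
zfs snapshot tank/public@clean

# Create a redaction bookmark: blocks that were modified in the clone's
# snapshot are the ones withheld from the send.
zfs redact tank/secret@snap1 book1 tank/public@clean

# Send the parent minus the redacted blocks; the receiver can later
# receive the clone's snapshot on top to get a fully usable dataset.
zfs send --redact book1 tank/secret@snap1 | ssh backup zfs receive pool/secret
```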
 

Yorick
New in beta2:

device_rebuild
 

Yorick
New in beta2.1:

zstd_compress
 

Ericloewe
At least zstd is fairly easy to explain, as far as this topic is concerned.

The only real gotcha, versus other compression options, is that the bootloader doesn't support it yet.
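For illustration, zstd is set per dataset like any other compression property, with an optional level suffix — dataset names here are hypothetical:

```shell
# Default zstd level (3) on a hypothetical dataset:
zfs set compression=zstd tank/data

# A higher level for colder data: slower writes, better ratio.
zfs set compression=zstd-19 tank/archive

# Since the bootloader can't read zstd yet, leave the boot pool's
# compression setting alone; check it with:
zfs get compression boot-pool
```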
 


Yorick
if a dedup or special dataset is created
That should be vdev, I think.
 

Ericloewe
Fixed, thanks!
 

Yorick
This process allows for faster recoveries, if the old disk is on its way out.
Yes. I think it also allows for faster recoveries if the new disk is limited in random write performance, such as SMR disks.

It may also be worthwhile to mention that this feature is only supported for mirrors, not for raidz. If memory serves.
 

Ericloewe
Yes. I think it also allows for faster recoveries if the new disk is limited in random write performance, such as SMR disks.

It may also be worthwhile to mention that this feature is only supported for mirrors, not for raidz. If memory serves.
It may be possible with an in-place replacement. I'll have to look into it to figure out whether that is the case.

As for random write performance, that is a good point.
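For what it's worth, the man pages gate sequential reconstruction behind a -s flag on zpool attach/replace, and they do state it isn't supported for raidz (checksums aren't verified during the rebuild, so a scrub is started automatically when it completes). A sketch with hypothetical pool and device names:

```shell
# Sequential (rebuild-style) replacement in a mirror: -s copies data
# sequentially to restore redundancy quickly, then kicks off a scrub.
zpool replace -s tank da2 da3

# The same flag works when attaching a new mirror member:
zpool attach -s tank da0 da4

# zpool status shows the rebuild and the follow-up scrub.
zpool status tank
```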
 

TooMuchData

Member
Joined
Jan 4, 2015
Messages
96
"the large_dnode feature flag breaks zfs send to systems that do not support this flag."
This is system dependent, not zpool dependent?
Better questions: If I upgrade both source and target zpools, will replication tasks continue as usual? If replication gets broken (one pool is upgraded but the other is not), will it be fixed by upgrading the other zpool?
Thanks for your tireless help to this community.
 

Ericloewe
"the large_dnode feature flag breaks zfs send to systems that do not support this flag."
This is system dependent, not zpool dependent?
Excellent question. I have no idea and I'll have to get back to you on that one.
Better questions: If I upgrade both source and target zpools will replication tasks continue as usual? If replication gets broken (one pool is upgraded but other not) will it be fixed by upgrading the other zpool?
I have no reason to think otherwise.
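One way to check, with hypothetical pool names: the flag state lives on each pool, so comparing it on source and destination shows whether a given stream can land. Features can also be enabled individually rather than via a blanket zpool upgrade:

```shell
# On the sending side:
zpool get feature@large_dnode sourcepool

# On the receiving side:
zpool get feature@large_dnode destpool

# Enable a single feature on the destination, instead of a blanket
# "zpool upgrade" that turns on everything at once:
zpool set feature@large_dnode=enabled destpool
```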
 