OpenZFS new capabilities

turment

Dabbler
Joined
Feb 3, 2020
Messages
46
Now that OpenZFS 2.0 has been released as stable, is there any way to increase the pool size by adding a disk?

Having to back up and re-import the data to increase the pool size is really a PITA.

Second question: is there a way to recompress the pool from lz4 to ZSTD?

Thanks ;)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Now that OpenZFS 2.0 has been released as stable, is there any way to increase the pool size by adding a disk?

If you're asking about RAIDZ expansion (and I think you are), the answer is still no (for now). https://www.truenas.com/community/threads/raidz-expansion-its-happening.58575

Although with the switch to OpenZFS we're on the path to it, once somebody gets around to writing the test plans and running through a bunch of test cycles to get it onto the release track.

Be aware that what you get at the end isn't a pool in 100% great shape, though: all of the existing data is still laid out at the old n-1 + parity width, and only newly written data gets n + parity blocks. I would not personally feel great about that and would want to re-write all my data (even if it was just copying it around between datasets inside the pool to get that sorted out).
 

turment

Dabbler
Joined
Feb 3, 2020
Messages
46
If you're asking about RAIDZ expansion (and I think you are), the answer is still no (for now).
Thanks for your clear reply.

What about compression "transcoding"? Do I have to back up and restore again to get it working on "old" files too?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What about compression "transcoding"? Do I have to back up and restore again to get it working on "old" files too?
I opted out of answering that one due to my own lack of experience with it. (I have no idea if there is something that can do it in a better or automatic way.)

An option would most likely be dataset washing: move the contents to a new dataset in the same pool, delete the original, then rename the new dataset to the original's name.

Of course you can also purge and restore that content from backup, but that requires 100% of the occupied capacity to exist somewhere else... dataset washing can be done in smaller increments (managing snapshots as you go to avoid bloat).
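Roughly something along these lines, as an untested sketch (the pool and dataset names here are made up, adjust to your own):

# snapshot the dataset you want to "wash"
zfs snapshot tank/olddata@wash
# copy it into a new dataset in the same pool; blocks are rewritten as they land
zfs send tank/olddata@wash | zfs receive tank/newdata
# once you're happy with the copy, drop the original (and its snapshots)
zfs destroy -r tank/olddata
# rename the new dataset back to the original name
zfs rename tank/newdata tank/olddata
# clean up the snapshot that came along with the receive
zfs destroy tank/olddata@wash

Doing this one dataset (or even one child dataset) at a time is what keeps the extra space requirement small.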
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I think I qualify as "having enough experience"...
Doing a send/receive to the same pool, but with different compression settings on the target dataset, does "transcode" the compression.
It can save quite a few percent going from lz4 to zstd :)
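For reference, the rough shape of it (a sketch I haven't verified on a live system; names are placeholders):

zfs snapshot tank/media@move
# a plain send is decompressed in transit, so the receive side recompresses
# with whatever the target dataset's compression property says
zfs send tank/media@move | zfs receive -o compression=zstd tank/media_zstd
# check the result
zfs get compression,compressratio tank/media_zstd

Note that a compressed send (zfs send -c) would keep the original lz4 blocks as-is, so you want a plain send here if the goal is recompression.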
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
ZFS send/receive is documented quite thoroughly elsewhere.
I would do a disservice to people more versed in ZFS send/receive if I guesstimated what I used a few months ago (I barely use it).
 