Easiest/safest way to uncompress existing files?

43615

Cadet
Joined
Mar 15, 2024
Messages
3
Situation: My pool was created with LZ4 compression, which turned out not to do much good and to cause CPU bottlenecking on sequential operations (old system). I have disabled compression, but that doesn't change the existing files.
Is there a nice and safe way to reprocess all existing files so they're not compressed? I have enough space for a full clone in case that's required.
 

43615

Cadet
Joined
Mar 15, 2024
Messages
3
Some numbers regarding the performance issues: the compression ratio is currently 1.09, and effective sequential read speed doesn't get past 300 MB/s, while I'd expect something closer to 1 GB/s, the hard limit of the DMI 1.0 x4 link feeding the drives.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Files are uncompressed only when they are rewritten. For that, you need to re-copy everything. One option would be to create a new dataset that is not compressed and use zfs send | zfs recv to migrate the data to that new dataset. Once there, just keep accessing your data from the new dataset. For that, you will need to reconfigure a few things like shares or mounts, depending on how you access your data.
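For reference, a minimal sketch of that migration at the CLI level. The pool/dataset names `tank`, `tank/data`, and `tank/data_uncompressed` are placeholders; on SCALE you'd likely drive this through the UI instead:

```shell
# Take a point-in-time snapshot of the source dataset.
zfs snapshot tank/data@migrate

# A plain zfs send (no -c flag) streams the logical, decompressed data,
# so the receiving side writes blocks with its own compression setting,
# which we force to off at receive time.
zfs send tank/data@migrate | zfs recv -o compression=off tank/data_uncompressed

# After verifying the copy, repoint shares/mounts at the new dataset,
# then destroy the old one (and its snapshots) to reclaim the space:
# zfs destroy -r tank/data
```

Note that `zfs send -c` (a compressed stream) would defeat the purpose here, since it preserves the on-disk compressed blocks as-is.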
 
winnielinnie
Joined
Oct 22, 2019
Messages
3,641
LZ4 compression, which turned out to not do much good and cause CPU bottlenecking on sequential operations (old system).
How old and slow is this hardware that LZ4 compression is causing notable slowdowns? :oops: I'd wager such a system is not a good candidate for TrueNAS in general.

Most of your data blocks may in fact not be compressed with LZ4, but rather saved uncompressed, due to the "early abort" feature.
 

43615

Cadet
Joined
Mar 15, 2024
Messages
3
Note my reply at the top; it was approved after your replies. The system is a budget build from an old PC, with an i7-880 (Q57 chipset) and DDR3-1333 memory.
The bottlenecking I'm seeing seems plausible considering some published LZ4 performance figures. The cause might also be overhead on the DMI or the chipset SATA controller failing to keep up.

As for the migration suggestion, does SCALE expose the snapshot send/receive functionality in a friendly way? "Replication Tasks" looks like the right direction; can you suggest a concrete way of doing it? Also, are you sure it will recreate the files rather than just making a bytewise copy of the snapshot?
Regardless, I'd prefer an in-place technique. From my cursory understanding, I'm thinking of something like Btrfs's `balance`.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Regardless, I'd prefer an in-place technique. From my cursory understanding, I'm thinking of something like Btrfs's `balance`.
ZFS does not have a way to uncompress in place.

If you have space, disable compression in the affected datasets and:
  1. Rename one file
  2. Copy renamed file back to original file name
  3. Erase renamed file
  4. Repeat until all files have been copied
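The steps above can be sketched as a loop over a whole directory tree. This is an assumption-laden sketch: `recopy_tree` is a hypothetical helper name, it expects compression to already be disabled on the dataset, and it assumes no filenames contain newlines and no file already ends in `.old`:

```shell
# Rewrite every regular file under the given directory in place:
# rename it, copy it back under the original name (the copy is written
# with the dataset's current compression setting), then erase the rename.
recopy_tree() {
    cd "$1" || return 1
    list=$(mktemp)
    # Fix the file list up front, so the loop never sees its own
    # temporary ".old" files.
    find . -type f > "$list"
    while IFS= read -r f; do
        mv -- "$f" "$f.old" &&
        cp -p -- "$f.old" "$f" &&
        rm -- "$f.old"
    done < "$list"
    rm -f "$list"
}

# usage: recopy_tree /mnt/tank/data
```

`cp -p` preserves permissions and timestamps, so the files look unchanged apart from their on-disk blocks being rewritten.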
Of course, if you have snapshots, that complicates the matter: until those snapshots expire (or are removed), the original data blocks will still be in use, temporarily doubling your storage requirements.

I do agree with @winnielinnie that some slowdowns are PC-related, not necessarily compression-related. Random, old, non-server-grade hardware does not necessarily make a fast NAS, not even with fast storage.
 