lz4 compression?


finsfree

Dabbler
Joined
Jan 7, 2015
Messages
46
Why are my files not being compressed?

I have created a Volume (Vol1) and a Dataset (Dataset1). Dataset1 is inheriting the lz4 compression from Vol1. However, when I compare a file in the Dataset with the same file on my C: drive, they are the same size. I do not see where it is being compressed.

I am viewing/comparing the files using File Explorer running on Windows 10. I have a mapped drive to my FreeNAS box (FreeNAS 11.1-U2).

Here is a screen shot of my storage. You can see the lz4 compression is enabled.
[screenshot: Storage view showing Vol1 and Dataset1 with lz4 compression enabled]
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
However, when I compare a file in the Dataset with the same file on my C: drive, they are the same size.
How are you doing that comparison? In most cases, you'll be comparing the logical uncompressed size.

It's also quite possible that your data is not compressible. Obviously, LZ4 can't magically compress it further.
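A quick way to see both numbers from the FreeNAS shell (a minimal sketch; the pool/dataset names and the file path are assumptions based on your screenshot) is to compare the allocated size with the apparent size, and to ask ZFS for the dataset-wide ratio:

Code:
# allocated (physical) size vs. apparent (logical) size of one file
du -h  /mnt/Vol1/Dataset1/somefile.bin
du -Ah /mnt/Vol1/Dataset1/somefile.bin

# compression setting and overall ratio for the dataset
zfs get compression,compressratio Vol1/Dataset1

If the two du numbers are roughly the same and the ratio is 1.00x, the data simply isn't compressing.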
 

finsfree

Dabbler
Joined
Jan 7, 2015
Messages
46
I'm doing the comparison with File Explorer.

Okay, let me put it this way: I want to see lz4 compression work. I want to compare two identical files and see that the one on the FreeNAS box is smaller (compressed).

Compressed to me means the file will be smaller. I don't see that happening yet.
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
It won't show that in File Explorer; File Explorer shows the logical, uncompressed size. The only way I know to compare is to have the exact same files on another hard drive that you have on a dataset in FreeNAS, then look at the dataset in FreeNAS to see how much space it uses compared to the other drive. But your dataset, Dataset1, shows a compression ratio of 1.00x, so it is not being compressed. That is most likely because the files you have stored there don't compress. What type of files do you have there? Most of the time video, database, and even picture files (as well as others) can't compress much.
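You can also get those numbers straight from zfs without copying anything to another drive. A minimal sketch, assuming the pool and dataset names from the screenshot:

Code:
zfs get used,logicalused,compressratio Vol1/Dataset1

logicalused is the size of the data before compression and used is what actually sits on disk; if they are about equal, the files are not compressing.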
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
You can get it from the command line. ls -lsh file will show the number of blocks used in the first column. Although it can be altered, the 'block' is typically 1K, so it shows the number of kilobytes being used. No relation to ZFS blocks.
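For example (a sketch with a hypothetical path), the first column is the allocated size, while the usual size field is the logical byte count; when compression is doing anything, the first number comes out smaller:

Code:
ls -lsh /mnt/Vol1/Dataset1/somefile.bin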
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I don't see that happening yet.
Look at your jails dataset.

You can get it from the command line. ls -lsh file will show the number of blocks used in the first column. Although it can be altered, the 'block' is typically 1K, so it shows the number of kilobytes being used. No relation to ZFS blocks.
You sure that won't just fake it with the logical uncompressed size?
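One way to check which number you are getting (a sketch, hypothetical path) is to read the block count directly with stat; on FreeBSD, %b is the number of blocks allocated for the file and %z is the byte size, and du and ls -s both take their figure from that block count:

Code:
stat -f "allocated blocks: %b, logical bytes: %z" /mnt/Vol1/Dataset1/somefile.bin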
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Don't know about the ls -lsh command, but something like this works fine:

Code:
/var/tmp/temp> dd if=/dev/zero bs=100K count=100 of=./zero.dd
100+0 records in
100+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.00884036 s, 1.2 GB/s
/var/tmp/temp> ls -lh
total 512
-rw------- 1 user users 9.8M Mar 22 23:18 zero.dd
/var/tmp/temp> du -sh .
1.0K   .


Obviously a 10MB file full of zeros is very compressible.

That said, ZFS is smart enough today that it does not even allocate a single block for my silly file. If I understand ZFS correctly, it's able to embed my data into the metadata for the file. So a repeat of the test in a dedicated ZFS dataset showed 1.00x compression ratio :-(. I guess that could be considered a bug.
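One way around that inline/embedded case (a sketch, with hypothetical file names) is to use a file that is highly repetitive but not all zeros, so it is far too big to be stored with the metadata yet still compresses well; the dataset's compressratio should then move off 1.00x as well:

Code:
yes "this line should compress well under lz4" | head -c 10485760 > /var/tmp/temp/repeat.txt
ls -lh /var/tmp/temp/repeat.txt   # logical size, roughly 10M
du -h  /var/tmp/temp/repeat.txt   # allocated size, much smaller if lz4 is working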
 

finsfree

Dabbler
Joined
Jan 7, 2015
Messages
46
You can get it from the command line. ls -lsh file will show the number of blocks used in the first column. Although it can be altered, the 'block' is typically 1K, so it shows the number of kilobytes being used. No relation to ZFS blocks.

I tried this command, but it still comes up with the same size as the original file.

I was using SSH.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's also quite possible that your data is not compressible. Obviously, LZ4 can't magically compress it further.

On your client, make a file that's 100 MB of zeros. Use that to test.
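Whether the file is made on the client and copied over the mapped drive, or written on the box directly, the check on the FreeNAS side is the same. A minimal sketch, with the dataset path assumed from the screenshot (on Windows, fsutil file createnew also produces a zero-filled file of a given size):

Code:
dd if=/dev/zero of=/mnt/Vol1/Dataset1/zeros.bin bs=1m count=100
ls -lh /mnt/Vol1/Dataset1/zeros.bin   # logical size: 100M
du -h  /mnt/Vol1/Dataset1/zeros.bin   # allocated size: should be close to nothing

Keep in mind the point above about all-zero blocks: they may not show up in the compressratio figure at all, but du will still make the difference obvious.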
 