Data compression on ZFS

Status
Not open for further replies.

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Could this be a bug?
Look:
My system is ZFS + zvol + iSCSI.
I am getting very different performance depending on the compression setting.
1. If I turn on LZ4 compression, I get approximately 300 Mbps write performance via iSCSI. The speed fluctuates constantly, from 100 Mbps up to 500 Mbps. Processor utilization is very low.
2. If I turn on GZIP compression (no matter what level), I get 900 Mbps write speed, varying from 600 up to 1000 Mbps. Processor utilization is high, up to 90% of the time.

What is going on?
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Could this be a bug?
Look:
My system is ZFS + zvol + iSCSI.
I am getting very different performance depending on the compression setting.
1. If I turn on LZ4 compression, I get approximately 300 Mbps write performance via iSCSI. The speed fluctuates constantly, from 100 Mbps up to 500 Mbps. Processor utilization is very low.
2. If I turn on GZIP compression (no matter what level), I get 900 Mbps write speed, varying from 600 up to 1000 Mbps. Processor utilization is high, up to 90% of the time.

What is going on?
Sorry for the blunt question, but how did you get such performance? What are your hardware specs, and how many NICs do you use?
I'm getting 200 Mbps per gigabit NIC, and I also see frequent "performance deteriorated" issues in the ESXi logs.
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Sorry for the blunt question, but how did you get such performance? What are your hardware specs, and how many NICs do you use?
I'm getting 200 Mbps per gigabit NIC, and I also see frequent "performance deteriorated" issues in the ESXi logs.


Xeon E5420, 20 GB RAM, 6x2 TB SATA in RAIDZ2, dual Intel gigabit Ethernet with both ports in use via MPIO round-robin, and sync=disabled.
I take the performance numbers from the FreeNAS reporting graphs.
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Almost identical configuration, though I don't dare turn sync off, and I use an ordinary mirror (RAID1). :smile:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
If sync is standard the write speed is a little bit slower..

Haha.. if sync=standard, pool performance can range from slightly slower to immensely slower. Good luck with your pool; sync=disabled is just asking for major problems.
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
<...>
I am getting very different performance depending on the compression setting.
1. If I turn on LZ4 compression, I get approximately 300 Mbps write performance via iSCSI. The speed fluctuates constantly, from 100 Mbps up to 500 Mbps. Processor utilization is very low.

2. If I turn on GZIP compression (no matter what level), I get 900 Mbps write speed, varying from 600 up to 1000 Mbps. Processor utilization is high, up to 90% of the time.

What is going on?

First make sure that your test data is not composed solely of zeros or something else that is highly compressible as that will skew your performance results.
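The point about compressible test data is easy to demonstrate. A minimal sketch in Python, using zlib's DEFLATE as a stand-in for ZFS's gzip (sizes here are arbitrary): all-zero data yields an absurdly high compression ratio, while random data barely compresses at all.

```python
import os
import zlib

# Compare how well 1 MiB of zeros vs. 1 MiB of random bytes compresses.
# zlib's DEFLATE stands in for ZFS's gzip here; the lesson is the same:
# benchmarking writes with all-zero test files wildly inflates throughput,
# because almost nothing actually has to hit the disks.
size = 1024 * 1024
zeros = bytes(size)
random_data = os.urandom(size)

zeros_ratio = size / len(zlib.compress(zeros))
random_ratio = size / len(zlib.compress(random_data))

print(f"zeros:  {zeros_ratio:.0f}x")   # hundreds-to-one
print(f"random: {random_ratio:.2f}x")  # roughly 1x (incompressible)
```

The same effect applies to any compressor ZFS offers, which is why a benchmark file should resemble the real workload.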

As you have found, LZ4 does not compress as well as GZIP but it uses significantly less CPU (especially when decompressing).

More info about LZ4 in ZFS can be found here:

http://freebsdnow.blogspot.com/2013/07/freebsd-92-feature-highlight-zfs-lz4.html
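The ratio-versus-CPU tradeoff described above can be sketched with the standard library. LZ4 itself is not in Python's stdlib, so this uses zlib's compression levels as an analogy: a low level behaves LZ4-like (fast, modest ratio), while a high level behaves gzip-9-like (slower, better ratio). The test data is invented for illustration.

```python
import time
import zlib

# Sketch of the ratio-vs-CPU tradeoff: zlib level 1 is fast with a modest
# ratio (LZ4-like), level 9 is slower but compresses better (gzip-9-like).
data = bytes(range(256)) * 4096  # 1 MiB of repeating, compressible data

for level in (1, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(data) / len(compressed):6.1f}x ratio, "
          f"{elapsed * 1000:.1f} ms")
```

The higher level never produces larger output on data like this, but it burns noticeably more CPU time, which mirrors the GZIP-vs-LZ4 utilization gap reported in this thread.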
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
First make sure that your test data is not composed solely of zeros or something else that is highly compressible as that will skew your performance results.

As you have found, LZ4 does not compress as well as GZIP but it uses significantly less CPU (especially when decompressing).

More info about LZ4 in ZFS can be found here:

http://freebsdnow.blogspot.com/2013/07/freebsd-92-feature-highlight-zfs-lz4.html


I have tested multiple times on real data - 5 TB of user files (doc, xls, pdf, jpg, CAD, etc.) - but for testing maximum write speed I use large backup files.
The speed results are consistently different:
GZIP uses ~90% CPU, LZ4 uses < 30%.

UPD: This is not a problem for me - I will just use GZIP - but I still wonder whether it might be a bug. Many people turn on LZ4 because it is recommended everywhere, and then they run into write-speed trouble over iSCSI (or CIFS or NFS).
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
I have tested multiple times on real data - 5 TB of user files (doc, xls, pdf, jpg, CAD, etc.) - but for testing maximum write speed I use large backup files.
The speed results are consistently different:
GZIP uses ~90% CPU, LZ4 uses < 30%.

The CPU utilization difference between GZIP and LZ4 you see is not a bug. As you have found, GZIP compresses better but uses much more CPU.

Since enabling compression makes such a big difference in the rate you can write data to your FreeNAS system, I think your disk subsystem is not fast enough (or is having problems). A single RAIDZ2 vdev is not the fastest configuration.

It would be interesting to see your transfer rate with no compression enabled at all to get a better baseline. Also, just to be sure, try testing without MPIO to rule that out as the cause of the problem. You aren't using Ethernet jumbo frames, are you?

I get performance from freenas reporting graphs.

Which reporting graph are you looking at? The Network graph? I don't think you should rely only on the FreeNAS reporting graphs for benchmark testing; those graphs display averages over time. It looks like the graph data is only sampled every 130 seconds on my system.

I suggest that you find a different way to measure write performance over the network. Does your source system have a way to measure the write speed during a transfer? If not, measure the total time to write a large file and use that to calculate your transfer rate over time. Oh -- can your source system send data fast enough to completely rule it out as the cause of the bottleneck?
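As a hypothetical sketch of that timing approach (the 10 GB size and 95-second duration are invented numbers, not measurements from this thread), a helper like this turns a timed copy into a rate you can compare against the graphs:

```python
# Hypothetical sketch: measure how long a large copy takes, then compute
# throughput yourself instead of trusting averaged reporting graphs.
def transfer_rate_mbps(bytes_written: int, seconds: float) -> float:
    """Throughput in megabits per second (to match the Mbps figures above)."""
    return bytes_written * 8 / seconds / 1_000_000

# e.g. a 10 GB backup file that took 95 seconds to copy:
rate = transfer_rate_mbps(10 * 10**9, 95.0)
print(f"{rate:.0f} Mbps")  # prints: 842 Mbps
```

Timing several large files and averaging gives a far steadier number than a graph sampled every couple of minutes.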

You have 6 x 2 TB in RAIDZ2? That would give you a bit less than 8 TB of ZFS pool space. You said that you have 5 TB of user files on there already. Keep in mind that ZFS write performance drops sharply once the pool gets too full (above 80% or 90%, or whatever the current recommended upper limit is), because ZFS switches to a different method of choosing free disk sectors to write to.
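The capacity arithmetic behind that estimate, as a quick sketch (disk counts are from the specs posted earlier; the fullness threshold is the rule of thumb mentioned above, not an exact limit):

```python
# RAIDZ2 keeps two disks' worth of parity, so usable space is roughly
# (disks - parity) * disk_size, before ZFS metadata overhead.
disks, disk_tb, parity = 6, 2.0, 2
usable_tb = (disks - parity) * disk_tb   # ~8 TB for 6 x 2 TB RAIDZ2
used_tb = 5.0                            # user files already on the pool

fullness = used_tb / usable_tb
print(f"usable ~{usable_tb:.0f} TB, about {fullness:.0%} full")
```

At roughly 62% full the pool is below the commonly cited 80-90% danger zone, but metadata overhead and snapshots push real fullness higher than this naive estimate.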
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
The CPU utilization difference between GZIP and LZ4 you see is not a bug. As you have found, GZIP compresses better but uses much more CPU.

Since enabling compression makes such a big difference in the rate you can write data to your FreeNAS system, I think your disk subsystem is not fast enough (or is having problems). A single RAIDZ2 vdev is not the fastest configuration.

It would be interesting to see your transfer rate with no compression enabled at all to get a better baseline. Also, just to be sure, try testing without MPIO to rule that out as the cause of the problem. You aren't using Ethernet jumbo frames, are you?



Which reporting graph are you looking at? The Network graph? I don't think you should rely only on the FreeNAS reporting graphs for benchmark testing; those graphs display averages over time. It looks like the graph data is only sampled every 130 seconds on my system.

I suggest that you find a different way to measure write performance over the network. Does your source system have a way to measure the write speed during a transfer? If not, measure the total time to write a large file and use that to calculate your transfer rate over time. Oh -- can your source system send data fast enough to completely rule it out as the cause of the bottleneck?

You have 6 x 2 TB in RAIDZ2? That would give you a bit less than 8 TB of ZFS pool space. You said that you have 5 TB of user files on there already. Keep in mind that ZFS write performance drops sharply once the pool gets too full (above 80% or 90%, or whatever the current recommended upper limit is), because ZFS switches to a different method of choosing free disk sectors to write to.



The difference is too obvious even without synthetic tests. Try it yourself!
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Not sure what version of FreeNAS you are currently running, but FreeNAS 9.2.1.6 was released on July 3rd. Some items of interest in this release that may apply to you are "An experimental in-kernel iSCSI target" and "various iSCSI fixes".

More info here: http://www.freenas.org/whats-new/2014/07/freenas-9-2-1-6-release-is-available.html


I use the latest version of FreeNAS. istgt works much better than the experimental iSCSI target.
With the experimental target and autotune settings, the transfer speed fluctuates and drops to 0.
 