I have tested several more times on real data - 5 TB of user files (doc, xls, pdf, jpg, cad, etc.) - but for testing maximum write speed I use large backup files.
The speed results are different.
GZIP uses about 90% CPU, LZ4 uses less than 30%.
The CPU utilization difference between GZIP and LZ4 you see is not a bug. As you have found, GZIP compresses better but uses much more CPU.
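Python's standard library doesn't include LZ4, but the same speed-versus-ratio tradeoff can be illustrated with zlib (the DEFLATE algorithm gzip uses) at its fastest and slowest settings - a rough sketch, not a benchmark of the actual ZFS compressors:

```python
import time
import zlib

# Repetitive sample data, standing in for typical compressible user files.
data = b"The quick brown fox jumps over the lazy dog. " * 20000

for level in (1, 9):  # level 1 = fast/light, level 9 = slow/thorough
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:.1f}x in {elapsed * 1000:.1f} ms")
```

On most machines level 9 yields a noticeably better ratio but takes several times longer, which is the same pattern you're seeing between GZIP and LZ4 at a larger scale.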
Since enabling compression makes such a big difference in the rate you can write data to your FreeNAS system, I think your disk subsystem is not fast enough (or is having problems). A single RAIDZ2 vdev is not the fastest configuration.
It would be interesting to see your transfer rate with no compression enabled at all, to get a better baseline. Also, just to be sure, try testing without MPIO to rule it out as the cause of the problem. You aren't using Ethernet jumbo frames, are you?
I get the performance numbers from the FreeNAS reporting graphs.
Which reporting graph are you looking at? The Network graph? I don't think you should rely only on the FreeNAS Reporting graphs for benchmark testing. Those graphs display averages over time; it looks like the graph data is only measured every 130 seconds on my system.
I suggest that you find a different way to measure write performance over the network. Does your source system have a way to measure the write speed during a transfer? If not, measure the total time to write a large file and use that to calculate your transfer rate over time. Oh -- can your source system send data fast enough to completely rule it out as the cause of the bottleneck?
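The total-time approach above is just arithmetic; here's a small sketch of the calculation (the 50 GB / 10 minute figures are made-up example numbers, not your measurements):

```python
def transfer_rate_mb_s(bytes_written: int, seconds: float) -> float:
    """Average throughput in MB/s (decimal megabytes)."""
    return bytes_written / seconds / 1_000_000

# Example: a 50 GB backup file that took 10 minutes to copy
rate = transfer_rate_mb_s(50 * 10**9, 10 * 60)
print(f"{rate:.0f} MB/s")  # -> 83 MB/s
```

Timing the whole transfer this way averages out any bursting, so it's a more honest number than eyeballing a graph.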
You have 6 x 2 TB in RAIDZ2? That would give you a bit less than 8 TB of ZFS pool space. You said that you have 5 TB of user files on there already. Keep in mind that ZFS write performance goes way down if the pool gets too full (above 80% or 90% or whatever the current recommended upper limit is), because ZFS switches to a different method of choosing free disk sectors to write to.
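For a rough sanity check on the numbers (ignoring ZFS metadata and padding overhead, so real usable space will be somewhat lower):

```python
def raidz2_usable_tb(disks: int, disk_tb: float) -> float:
    """Approximate usable capacity of a single RAIDZ2 vdev:
    two disks' worth of space go to parity."""
    return (disks - 2) * disk_tb

usable = raidz2_usable_tb(6, 2.0)
print(usable)        # -> 8.0 (TB, before overhead)
print(0.8 * usable)  # -> 6.4 (TB at the ~80% fullness threshold)
```

With 5 TB already on the pool, you're not far from that 6.4 TB mark, so fragmentation-related slowdown is worth keeping in mind even if it isn't the cause today.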