ZFS large recordsize use case

Elliott
Dabbler
Joined: Sep 13, 2019
Messages: 40
I have been benchmarking pools with different recordsize values for a workload of sequential reads and writes of very large files. I find that throughput increases with recordsize up to 1M. I tried larger sizes of 2M, 4M, and 16M, but 1M seems to be the sweet spot. I noticed the warning:
Code:
# sysctl -d vfs.zfs.max_recordsize
vfs.zfs.max_recordsize: Maximum block size.  Expect dragons when tuning this.

With other filesystems, I have seen throughput keep increasing all the way up to 16M block sizes. I'm curious: did the ZFS team choose this limit intentionally and optimize around 1M blocks?
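
For reference, this is roughly what I do between runs — the pool/dataset name is just a placeholder for my test setup, and the dd pass is only a crude stand-in for the actual benchmark:
Code:
# raise the ceiling so record sizes above the 1M default can be set at all (the tunable quoted above)
sysctl vfs.zfs.max_recordsize=16777216

# apply the record size under test; only files written after this pick up the new size
zfs set recordsize=1M tank/bench

# disable compression so the all-zero test data isn't compressed away
zfs set compression=off tank/bench

# crude sequential write then read of a 32 GiB file
dd if=/dev/zero of=/mnt/tank/bench/seqfile bs=1m count=32768
dd if=/mnt/tank/bench/seqfile of=/dev/null bs=1m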
 

sretalla
Powered by Neutrality
Moderator
Joined: Jan 1, 2016
Messages: 9,703
I'm not sure if @Allan Jude has any comment on that, but I imagine it may have something to do with optimising the parity calculation.
 