ZFS Block Allocation Algorithms and Possible Relationship to Fragmentation?

HarryMuscle

Contributor
Joined
Nov 15, 2021
Messages
161
For those familiar with the two block allocation algorithms used by ZFS (first fit and best fit), does anyone know the reason the algorithm is switched at 96% pool capacity (technically I believe it's 96% metaslab capacity, which doesn't always translate to 96% pool capacity)? I've read comments that usually seem to hint at reducing fragmentation when the pool gets really full, but I've also read that the best fit algorithm increases fragmentation.

Based on my understanding of the two algorithms, first fit would be best at reducing both file and free space fragmentation, because it searches for the first available block large enough for the data that is closest to and after the previous block (although I would love someone to confirm this understanding because I might be incorrect ... I am not 100% sure that it's after the previous block, it might be after something else, but after the previous block makes the most sense to me). The best fit algorithm, on the other hand, searches for the smallest block that still fits the data in, I assume, the current metaslab (I'm not sure if it searches other metaslabs before making a decision on which block to use). That would mean that while best fit might reduce free space fragmentation by filling in empty spaces in the metaslab, it actually increases file fragmentation, because it will spread the file across blocks that aren't close together but that fit the empty space best. I've put a small sketch of how I picture the two strategies below.

That brings us back to the original question: why use the best fit algorithm after the pool is 96% full? The only answer I can think of is that it's to utilize the remaining space in the most efficient manner at the expense of file fragmentation (and performance).
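
Just so we're all talking about the same thing, here's a toy sketch in C of the two strategies as I understand them. This is not the actual OpenZFS metaslab code (which works on range trees and per-metaslab cursors in metaslab.c); the segment list, names, and sizes are made up purely to show the difference, so please correct me if this picture is wrong:

/*
 * Toy comparison of first fit vs. best fit over a made-up free space map.
 * Not the real OpenZFS allocator; just my mental model of the two strategies.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct free_seg {
    uint64_t start;  /* offset of the free segment within the metaslab */
    uint64_t size;   /* length of the free segment */
} free_seg_t;

/* First fit: walk forward from a cursor and take the first segment big enough. */
static int
first_fit(const free_seg_t *segs, int nsegs, uint64_t cursor, uint64_t asize)
{
    for (int i = 0; i < nsegs; i++) {
        if (segs[i].start >= cursor && segs[i].size >= asize)
            return (i);
    }
    return (-1);  /* nothing past the cursor; a real allocator would resweep */
}

/* Best fit: scan every segment and take the smallest one that still fits. */
static int
best_fit(const free_seg_t *segs, int nsegs, uint64_t asize)
{
    int best = -1;
    for (int i = 0; i < nsegs; i++) {
        if (segs[i].size >= asize &&
            (best == -1 || segs[i].size < segs[best].size))
            best = i;
    }
    return (best);
}

int
main(void)
{
    /* Hypothetical free space map: offsets and sizes in "blocks". */
    free_seg_t segs[] = {
        { 100, 64 }, { 300, 16 }, { 500, 256 }, { 900, 32 }
    };
    int nsegs = (int)(sizeof (segs) / sizeof (segs[0]));
    uint64_t asize = 24;  /* size of the allocation we want */

    int ff = first_fit(segs, nsegs, 0, asize);
    int bf = best_fit(segs, nsegs, asize);

    /*
     * First fit picks the 64-block hole at offset 100 (closest forward match),
     * keeping the write near the cursor; best fit picks the 32-block hole at
     * offset 900 (tightest match), leaving less slack but jumping further away.
     */
    printf("first fit -> offset %llu\n", (unsigned long long)segs[ff].start);
    printf("best fit  -> offset %llu\n", (unsigned long long)segs[bf].start);
    return (0);
}

If that picture is right, it also shows why I'd expect best fit to scatter a file more: each allocation goes wherever the tightest hole happens to be, not wherever the previous write left off.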

Thanks,
Harry
 