TrueNAS CORE special/metadata vdev underutilized?

CDRG

Dabbler
Joined
Jun 12, 2020
Messages
18
I've just rejigged my NAS by creating 11 vdevs of two-disk HDD mirrors, plus a three-wide SSD mirror as a special vdev, with 2x HDD spares and 1x SSD spare.

I've set the metadata small block size (special_small_blocks) to 32KiB on both of my datasets. However, after reloading all the data into the corresponding datasets, I have the following, truncated for brevity:

truenas[~]# zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
TEH                                              121T  57.7T  63.2T        -         -     0%    47%  1.00x  ONLINE  /mnt
...
special                                             -      -      -        -         -      -      -      -       -
  mirror-13                                      928G  38.7G   889G        -         -     3%  4.17%      -  ONLINE
    gptid/66aaf24d-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -  ONLINE
    gptid/66acf448-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -  ONLINE
    gptid/66b04e38-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -  ONLINE


truenas[~]# zdb -Lbbbs -U /data/zfs/zpool.cache TEH
...
Block Size Histogram

  block   psize                  lsize                  asize
   size   Count   Size   Cum.   Count   Size   Cum.   Count   Size   Cum.
    512:   192K  95.8M  95.8M    192K  95.8M  95.8M       0      0      0
     1K:   259K   299M   395M    259K   299M   395M       0      0      0
     2K:   215K   570M   965M    215K   570M   965M       0      0      0
     4K:  2.01M  8.30G  9.24G    259K  1.42G  2.37G   1.34M  5.37G  5.37G
     8K:  1.98M  22.0G  31.2G    240K  2.64G  5.01G   2.02M  17.6G  23.0G
    16K:  1.17M  25.8G  57.0G    391K  7.86G  12.9G   2.41M  54.6G  77.6G
    32K:  2.57M   117G   174G   2.49M  83.8G  96.7G   2.58M   117G   195G
    64K:  7.04M   652G   826G    334K  28.5G   125G   7.06M   653G   848G
   128K:   150M  19.0T  19.8T    158M  19.8T  19.9T    150M  19.0T  19.8T
   256K:   152M  37.9T  57.7T    154M  38.6T  58.5T    152M  37.9T  57.7T
   512K:      0      0  57.7T       0      0  58.5T       0      0  57.7T
     1M:      0      0  57.7T       0      0  58.5T       0      0  57.7T
     2M:      0      0  57.7T       0      0  58.5T       0      0  57.7T
     4M:      0      0  57.7T       0      0  58.5T       0      0  57.7T
     8M:      0      0  57.7T       0      0  58.5T       0      0  57.7T
    16M:      0      0  57.7T       0      0  58.5T       0      0  57.7T
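For reference, the property I'm referring to above is the dataset-level special_small_blocks setting; on the CLI, setting and checking it would look roughly like this (the dataset name here is just a placeholder, not my actual one):

zfs set special_small_blocks=32K TEH/dataset1
zfs get special_small_blocks TEH/dataset1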

So either I'm doing something incorrectly here, which is entirely possible (probable, even), or something else is odd, because by simple math that special vdev should hold more than the 38.7G it's showing.

That said, my assumption is that the filling of said special vdev is automatic, and that simply tweaking the dataset config to adjust the small block size would be all that's required to make use of it.
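For what it's worth, the simple math I mean, reading the asize column of the histogram above (assuming that's the right column to look at):

4K–32K buckets: 5.37G + 17.6G + 54.6G + 117G ≈ 195G of blocks at or below 32K
vs. 38.7G actually allocated on the special mirror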
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
the utilization/filling of said special vdev is automatic
Only for new/modified blocks.

If you want to retrofit that to all files, you'll need to do something to force that to happen.

Either copy all data off and back on in some kind of backup/restore test operation or use a rebalancing script:
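Just to illustrate the idea (the real scripts handle far more edge cases): rewriting each file in place forces its blocks to be reallocated, at which point qualifying small blocks land on the special vdev. A very simplified sketch, with a hypothetical path, no handling of hardlinks, snapshots or open files, and not something to run without a backup:

# rewrite every file so its blocks get reallocated
find /mnt/TEH/dataset1 -type f | while read -r f; do
    cp -a "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
done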

 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Sure, I missed the significance of that bit.

What is the recordsize of the dataset(s)?

It seems that a reasonable number for the special_small_blocks value is 128K if you have a recordsize higher than that.

Metadata is not going to be enormous compared to the data in your pool though, so unless you have bucketloads of small files, you wouldn't be expecting to see a particularly full special VDEV.
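As an example of what I mean (dataset name purely illustrative):

# see what the dataset currently uses
zfs get recordsize,special_small_blocks TEH/dataset1

# with a recordsize above 128K, sending blocks up to 128K to the special vdev is reasonable;
# if special_small_blocks equalled the recordsize, essentially every data block would go there
zfs set special_small_blocks=128K TEH/dataset1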
 

CDRG

Dabbler
Joined
Jun 12, 2020
Messages
18
Record size for one is 128K; for the second it's 256K, as I was just testing some things in that respect. However, given the histogram, looking at 32K and smaller, unless I'm reading it wrong, those values suggest said vdev should be more utilized than it is.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I'm not sure if that histogram takes compression into account; zpool list certainly will.
 

CDRG

Dabbler
Joined
Jun 12, 2020
Messages
18
Fair point. Compression ratio is 1.00 on both datasets and 1.01 on the pool itself, so I’m assuming that’s not coming into play here.
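For reference, those numbers come from something along the lines of (dataset names are illustrative):

zfs get compressratio TEH                        # the pool's root dataset
zfs get compressratio TEH/dataset1 TEH/dataset2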
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, well I'm out of ideas as to why it's not matching up if you did everything you said.

The next step would be a GitHub issue on the OpenZFS project.

Or if you're lucky, @jgreco or another experienced forum member may have something to say about it.
 