Problems with metadata vdevs

neurax

Cadet
Joined
Nov 9, 2023
Messages
6
Hi, my system: TrueNAS-SCALE-22.12.3.3, virtualized on PVE 8.0.2 (Xeon D-1581 CPU), with 8×8 TB SATA drives and 2×2 TB NVMe drives.
zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------------------------------------- ----- ----- ----- ----- ----- -----
boot-pool 2.76G 108G 0 0 20.6K 4.10K
  mirror-0 2.76G 108G 0 0 20.6K 4.10K
    sdl3 - - 0 0 10.4K 2.05K
    sdm3 - - 0 0 10.2K 2.05K
---------------------------------------- ----- ----- ----- ----- ----- -----
xms 49.8T 8.80T 104 158 820K 2.84M
  raidz1-0 49.6T 8.56T 10 57 168K 1.52M
    0998aa16-0a16-4f57-a2e1-6f94374fc745 - - 1 6 22.0K 197K
    5dfe9aff-83b2-446d-b089-8b05c0bc4f36 - - 1 7 20.0K 192K
    d1c6fede-4a23-41fd-9c93-cbec618bb16f - - 1 6 21.9K 197K
    38d22417-ec86-4094-940e-f4117368636c - - 1 6 19.7K 193K
    2a1bd060-e0c8-4eb8-90fa-b9a04eea0199 - - 1 7 22.3K 197K
    17dd6367-1df6-4b7e-a7c0-780b0f0b08cb - - 1 8 19.9K 192K
    2380b09c-b127-4ec7-92c1-277b38da0fee - - 1 8 22.3K 197K
    8eeff36a-749a-4aa1-9cc7-3cca1905c9e6 - - 1 6 19.7K 193K
dedup - - - - - -
  mirror-4 31.0G 18.5G 1 34 8.02K 580K
    9542801b-372a-472e-97e2-dac783f3380c - - 0 15 3.86K 290K
    6c547446-4408-41ec-abbf-b1122757ec4a - - 0 19 4.16K 290K
special - - - - - -
  mirror-2 102G 126G 92 44 643K 506K
    622fdec2-6a56-4280-9607-8367b74ddf58 - - 45 21 317K 253K
    d00af9d8-927b-4526-9656-382a024767b3 - - 46 22 327K 253K
logs - - - - - -
  mirror-3 596K 199G 0 0 2 27.4K
    e5335635-a65d-42c8-84e1-8e2a54e5cd74 - - 0 0 1 13.7K
    f8127364-b38b-4ba8-a2ea-fb3193d76970 - - 0 0 1 13.7K
cache - - - - - -
  290ff39a-62fa-4310-be8a-1aeb72ccafce 121G 79.3G 90 2 143K 247K
  126a498c-4af4-4a93-b23d-8b3484dfa1f0 122G 78.3G 99 2 151K 247K
---------------------------------------- ----- ----- ----- ----- ----- -----

One day I noticed the metadata (special) vdev only used 103 G, so I tried to swap in a smaller one. To try this out first, I created a test system.

Same TrueNAS version, same PVE 8.0.2.
zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------------------------------------- ----- ----- ----- ----- ----- -----
boot-pool 5.00G 26.0G 0 1 617 20.6K
  sda3 5.00G 26.0G 0 1 617 20.6K
---------------------------------------- ----- ----- ----- ----- ----- -----
test 27.1G 21.9G 0 1 29 10.8K
  mirror-0 27.1G 2.43G 0 0 13 1.01K
    8d3f4a5f-099d-4d58-b80c-6a836bbf68e0 - - 0 0 8 516
    a193f17c-12e8-40ed-80fa-326b09bb9484 - - 0 0 5 516
special - - - - - -
  mirror-5 79.4M 19.4G 0 1 13 9.69K
    c484d377-d805-4ba5-a91e-9dca49c34df5 - - 0 0 6 4.85K
    1a8ad8c5-7f38-4b8c-8458-3568788bc6ce - - 0 0 6 4.85K
logs - - - - - -
  mirror-7 0 31.5G 0 0 3 129
    ac8f3c48-06e5-499e-84a8-08fbcd88f34a - - 0 0 1 64
    0dbd932d-688b-4baa-9f51-5833771bb37c - - 0 0 1 64
---------------------------------------- ----- ----- ----- ----- ----- -----
Then I can see the Remove option here in the GUI.
[Screenshot: the vdev Remove option in the TrueNAS GUI]


I also added a 2×32 G mirror vdev; I could still remove the 20 G mirror vdev, or remove both mirrors entirely.

Then I added a 2×100 G special vdev to the first system, and it can't be removed. The old 230 G metadata vdev can't be removed either.

special - - - - - -
  mirror-2 102G 126G 92 44 643K 506K
    622fdec2-6a56-4280-9607-8367b74ddf58 - - 45 21 317K 253K
    d00af9d8-927b-4526-9656-382a024767b3 - - 46 22 327K 253K
  mirror-5 83.2M 99.4G 0 27 698 302K
    038fd86e-0f39-4769-b3de-33980279d58d - - 0 13 398 147K
    ca99fa06-1613-4090-8f98-11fd562ac13c - - 0 13 278 147K


I filled the test pool with data until it was about 95% used and created a share; the metadata vdev could still be removed.
I can't understand this. Has anyone seen this before? Thanks!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
When using RAID-Z1 as the data portion of the ZFS pool, you cannot remove (or shrink) special vDevs. Special vDevs are the De-Dup, Special (metadata), and Small File classes.

In your example, you use Mirror instead of RAID-Z1, thus, you are able to remove Mirror devices.
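This restriction can be reproduced with small file-backed vdevs. The sketch below is not from the thread: the pool name, file paths, and sizes are made up, and the commands need root plus an installed OpenZFS.

```shell
# Hypothetical sketch: requires root and OpenZFS; pool name "demo" and
# /tmp file paths are placeholders, not devices from this thread.
truncate -s 256M /tmp/d1 /tmp/d2 /tmp/d3 /tmp/s1 /tmp/s2

# Pool whose data vdev is raidz1: the special mirror cannot be removed.
zpool create demo raidz1 /tmp/d1 /tmp/d2 /tmp/d3 special mirror /tmp/s1 /tmp/s2
zpool remove demo mirror-1   # refused: a top-level raidz vdev is present
zpool destroy demo

# Same pool built from mirrors only: removal evacuates the data and succeeds.
zpool create demo mirror /tmp/d1 /tmp/d2 special mirror /tmp/s1 /tmp/s2
zpool remove demo mirror-1   # succeeds
zpool destroy demo
rm /tmp/d1 /tmp/d2 /tmp/d3 /tmp/s1 /tmp/s2
```

This matches what the test system showed: the all-mirror `test` pool offered Remove in the GUI, while the `xms` pool with `raidz1-0` did not.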
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
RAIDZ1 is advised against with large HDDs, and the pool seems to have been designed to tick every possible extra box: special, dedup (distinct from special!), log (is there any actual sync write load?) and cache (is there enough RAM to support it?).
But the hardware is listed as "8*8T SATA and 2*2T NVME". Should we understand that the four mirror vdevs are actually made of partitions from two (consumer?) NVMe drives? This is a disaster waiting to happen…
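Whether the cache vdev earns its keep can be sanity-checked from the raw ARC counters that ZFS on Linux exposes. A sketch, assuming a TrueNAS SCALE (Linux) shell; the "near zero" heuristic in the comment is a rule of thumb, not an official threshold:

```shell
# On ZFS on Linux (TrueNAS SCALE), ARC/L2ARC counters are exposed in procfs.
# If l2_hits stays near zero relative to ARC misses over time, the cache
# (L2ARC) vdev is contributing little and the RAM would serve ARC better.
grep -E '^(hits|misses|l2_hits|l2_misses) ' /proc/spl/kstat/zfs/arcstats
```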
 

neurax

Cadet
Joined
Nov 9, 2023
Messages
6
When using RAID-Z1 as the data portion of the ZFS pool, you cannot remove (or shrink) special vDevs. Special vDevs are the De-Dup, Special (metadata), and Small File classes.

In your example, you use Mirror instead of RAID-Z1, thus, you are able to remove Mirror devices.
Thanks a lot, I understand now.
 

neurax

Cadet
Joined
Nov 9, 2023
Messages
6
RAIDZ1 is advised against with large HDDs, and the pool seems to have been designed to tick every possible extra box: special, dedup (distinct from special!), log (is there any actual sync write load?) and cache (is there enough RAM to support it?).
But the hardware is listed as "8*8T SATA and 2*2T NVME". Should we understand that the four mirror vdevs are actually made of partitions from two (consumer?) NVMe drives? This is a disaster waiting to happen…
In fact, I have 4 NVMe SSDs: I made two mirrors in PVE, created virtual disks for TrueNAS, then made mirrors again inside TrueNAS.
I use a custom motherboard, a Xeon D-1581 CPU, and 128 GB of ECC RAM. The pool is used to store movies and run a Jellyfin media server.
Other important files are on a 16 TB SATA mirror, without special, dedup, or log vdevs.
So I think it's OK. Thanks a lot.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
In fact, I have 4 NVMe SSDs: I made two mirrors in PVE, created virtual disks for TrueNAS, then made mirrors again inside TrueNAS
That is terrible and pointless. It's added complexity for no gain and a lot more pain. How are the SATA disks attached? Please don't say "virtual disks" or "storage pass-through" or anything other than "LSI SAS HBA via PCIe passthrough to the guest".
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
So I think it's OK. Thanks a lot.
None of that is "OK". Virtual disks from a hypervisor are a big, flashing, red NO.
And 16 TB HDDs in a 2-way mirror are a cause for concern as well, due to the amount of data which would be without redundancy after a drive loss, and the time to rebuild. Let's hope you have a safe backup of that.
 

neurax

Cadet
Joined
Nov 9, 2023
Messages
6
That is terrible and pointless. It's added complexity for no gain and a lot more pain. How are the SATA disks attached? Please don't say "virtual disks" or "storage pass-through" or anything other than "LSI SAS HBA via PCIe passthrough to the guest".
Yes, it's "storage pass-through", and it's too late to change it now.
 

neurax

Cadet
Joined
Nov 9, 2023
Messages
6
None of that is "OK". Virtual disks from a hypervisor are a big, flashing, red NO.
And 16 TB HDDs in a 2-way mirror are a cause for concern as well, due to the amount of data which would be without redundancy after a drive loss, and the time to rebuild. Let's hope you have a safe backup of that.
You're right, I'll make a backup ASAP!
 