As most know, for performance reasons one should not fill a ZFS pool above 80% utilization (disk space).
However, I've always wondered: why is it OK for ZFS to fill a vdev well beyond 80% (e.g., in one 2-vdev pool of mine, one vdev is currently at 97%), while we're told not to fill a pool above 80%?
(I realize one is a vdev and the other is the entire pool, so perhaps that is the reason, but I've always wondered about this.)
Here is the example I'm referring to (FYI, this pool started out as a one-vdev pool; once it reached ~70% disk usage, I added a second vdev; both vdevs use the exact same disks/sizes):
(I'm just curious about the answer/reason; I'm not having any issues or problems, and performance is still great on this pool.)
Thanks!
Code:
root@freenas:~/EMAILscripts # zpool list -v he8x8TBz2
NAME                                            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
he8x8TBz2                                       116T  65.1T  50.9T        -         -    13%    56%  1.00x  ONLINE  /mnt
  raidz2                                         58T  56.4T  1.63T        -         -    27%    97%
    gptid/ab4a8be7-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/ac4d1939-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/ad4d7a28-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/ae4b5c46-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/af54390e-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/b05a4b41-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/b15cf9b3-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
    gptid/b2675d33-c451-11e9-bbf0-00259084f1c8     -      -      -        -         -      -      -
  raidz2                                         58T  8.74T  49.3T        -         -     0%    15%
    gptid/e1eba74e-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/e34d7ec3-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/e4a114e3-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/e608021e-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/e727453f-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/eb0401da-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/eed1a7b2-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
    gptid/f013e8e4-d1bc-11e9-bae4-00259084f1c8     -      -      -        -         -      -      -
log                                                -      -      -        -         -      -
  gptid/ed098586-c7bc-11e9-975e-00259084f1c8   15.5G   728K  15.5G        -         -     0%     0%

ZPOOL STATUS:

  pool: he8x8TBz2
 state: ONLINE
  scan: scrub repaired 0 in 1 days 07:56:31 with 0 errors on Wed Sep 11 12:55:02 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        he8x8TBz2                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/ab4a8be7-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ac4d1939-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ad4d7a28-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ae4b5c46-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/af54390e-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b05a4b41-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b15cf9b3-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b2675d33-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/e1eba74e-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e34d7ec3-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e4a114e3-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e608021e-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e727453f-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/eb0401da-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/eed1a7b2-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/f013e8e4-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
        logs
          gptid/ed098586-c7bc-11e9-975e-00259084f1c8    ONLINE       0     0     0

errors: No known data errors
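(For anyone sanity-checking the numbers: the pool-level CAP is just total ALLOC over total SIZE, i.e. 65.1T / 116T ≈ 56%, while the first vdev sits at 56.4T / 58T ≈ 97% and the second at 8.74T / 58T ≈ 15%. Here's a rough one-liner to pull just the per-vdev CAP values out of that output; this is a sketch that assumes the column layout shown above, where CAP is the 8th whitespace-separated field on the raidz2 rows:)

Code:
# print per-vdev capacity, assuming CAP is the 8th field on the
# raidz2 rows as in the listing above
zpool list -v he8x8TBz2 | awk '$1 ~ /^raidz/ {print $1, $8}'
# expected output given the listing above:
#   raidz2 97%
#   raidz2 15%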