I created a pool with two RAIDZ2 vdevs of six drives each; each drive is 6 TiB.
I created a 15 TiB zvol with a 16 KiB block size on the pool, and the pool showed ~15.2 TB of usage, which seemed fine.
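The zvol in question is store/main (shown in the listings below) and was created through the FreeNAS UI; as far as I know that is equivalent to something like this (the exact flags are my reconstruction, not what the UI logged):
Code:
zfs create -V 15T -o volblocksize=16K store/main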
I wrote approximately 6.5 TB to the zvol and took some snapshots.
Shortly after my transfer completed, a disk failed outright. I removed the dead disk, added a spare I had bought, and resilvered; everything seemed fine.
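The replacement was done from the UI as well; my understanding is that under the hood it boils down to something like the following (the gptid names here are placeholders, not my actual device ids):
Code:
zpool replace store gptid/<failed-gptid>.eli gptid/<new-gptid>.eli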
Unfortunately, my pool now reads:
HEALTHY: 21.75 TiB (52%) Used / 20.19 TiB Free
Where has the excess space gone?
I erased the previously taken snapshots, since they were only for testing while the data was transferring, and started a schedule of hourly snapshots.
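In case it helps with the diagnosis, here are the space-accounting properties I can pull from the zvol; the property selection is my guess at what's relevant:
Code:
zfs get volsize,volblocksize,refreservation,usedbydataset,usedbyrefreservation,usedbysnapshots store/main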
Output of zfs list:
Code:
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                            1.00G  91.5G    23K  none
freenas-boot/ROOT                                       1023M  91.5G    23K  none
freenas-boot/ROOT/Initial-Install                          1K  91.5G  1020M  legacy
freenas-boot/ROOT/default                               1023M  91.5G  1020M  legacy
store                                                   21.8T  20.2T   176K  /mnt/store
store/.system                                           36.5M  20.2T   192K  legacy
store/.system/configs-ee21f3733ab34d689aed5079e9ac663b  1.67M  20.2T  1.67M  legacy
store/.system/cores                                      799K  20.2T   799K  legacy
store/.system/rrd-ee21f3733ab34d689aed5079e9ac663b      32.7M  20.2T  32.7M  legacy
store/.system/samba4                                     216K  20.2T   216K  legacy
store/.system/syslog-ee21f3733ab34d689aed5079e9ac663b    767K  20.2T   767K  legacy
store/.system/webui                                      176K  20.2T   176K  legacy
store/test2                                              256K  20.2T   256K  /mnt/store/test2
store/test1                                              256K  20.2T   256K  /mnt/store/test1
store/main                                              21.8T  35.4T  6.52T  -
Output of zpool list:
Code:
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  95.5G  1.00G  94.5G        -         -     0%     1%  1.00x  ONLINE  -
store           65T  9.79T  55.2T        -         -     0%    15%  1.00x  ONLINE  /mnt
Output of zfs list -t snapshot:
Code:
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/default@2020-05-11-10:28:53  3.03M      -  1020M  -
store/main@auto-2020-05-19_17-00                   0      -  6.52T  -
store/main@auto-2020-05-19_18-00                   0      -  6.52T  -
Output of zpool status:
Code:
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:03 with 0 errors on Mon May 18 03:45:03 2020
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p2      ONLINE       0     0     0
            ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: store
 state: ONLINE
  scan: scrub in progress since Tue May 19 17:55:14 2020
        3.90T scanned at 1.59G/s, 2.71T issued at 1.11G/s, 9.79T total
        0 repaired, 27.67% done, 0 days 01:49:14 to go
config:

        NAME                                                STATE     READ WRITE CKSUM
        store                                               ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/9b773def-99da-11ea-ad82-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f89b876d-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f771128b-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f79a743e-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f8e7394a-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f939a350-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
          raidz2-1                                          ONLINE       0     0     0
            gptid/f8455b43-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f7d29c3e-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f8baee36-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f792359c-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f8de70e8-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0
            gptid/f9169dbd-9388-11ea-ab8d-f4f26d034739.eli  ONLINE       0     0     0

errors: No known data errors
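For what it's worth, here is my own back-of-envelope attempt at the number, assuming ashift=12 (4 KiB sectors), which I have not verified on this pool:
Code:
16 KiB block / 4 KiB sector     = 4 data sectors per block
RAIDZ2 parity on a 6-wide vdev  = 2 parity sectors per block
raw-to-logical ratio            = (4 + 2) / 4 = 1.5
15 TiB volsize x 1.5            = 22.5 TiB worst-case raw charge
That 1.5x ratio would put a fully reserved 15 TiB zvol in the same ballpark as the 21.8T USED shown for store/main, so maybe the "missing" space is just parity overhead folded into the refreservation at the small 16 KiB block size, rather than anything the failed disk or the resilver did? I'd appreciate confirmation.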