Hi
This is my first time posting here, so please bear with me; English is not my first language.

Version: TrueNAS-12.0-U8.1
- Motherboard make and model: ProLiant DL180 Gen9
- CPU make and model: 2 x Xeon E5-2623 v4
- RAM quantity: 512GB
- Hard drives (quantity, model numbers, and RAID configuration, including boot drives): 75 x 12TB HDDs; RAID configuration: RAID1-0

I have been monitoring SMART checks and so on, but 10 HDDs have now failed, and the ZFS resilver that is running keeps failing as well.
Code:
raidz1-13 DEGRADED 0 0 0
  spare-0 DEGRADED 0 0 1.50M
    replacing-0 DEGRADED 0 0 0
      gptid/e8ff3d6e-168e-11ed-b8f6-70106fb26160 REMOVED 0 0 0
      gptid/65e32e7c-1df9-11ed-9999-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/596e1c5e-0ff2-11e8-a67b-70106fb26160 OFFLINE 0 0 0 (awaiting resilver)
  replacing-1 DEGRADED 0 0 1.50M
    17123214133085649548 UNAVAIL 0 0 0 was /dev/gptid/79fbff96-da97-11ec-9f73-70106fb26160 (awaiting resilver)
    gptid/5ec0327a-1bb9-11ed-877a-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/4cce8af1-1c27-11ed-988e-70106fb26160 ONLINE 0 0 0 (resilvering)
  replacing-2 DEGRADED 0 0 1.50M
    1405824562122750834 UNAVAIL 0 0 0 was /dev/gptid/02c13749-da05-11ec-9f73-70106fb26160 (awaiting resilver)
    gptid/4aed3f9a-1df5-11ed-9999-70106fb26160 ONLINE 0 0 0 (resilvering)
raidz1-14 ONLINE 0 0 0
  gptid/d16ad6dc-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/d189a60f-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/d1da5576-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
raidz1-15 ONLINE 0 0 0
  gptid/f57c7a9e-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/f5994187-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/f5f50df1-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
raidz1-16 ONLINE 0 0 0
  gptid/8db9b2c4-23c9-11eb-929b-70106fb26160 ONLINE 0 0 0
  gptid/3d93e187-d3f7-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/3ded9325-d3f7-11ea-8572-70106fb26160 ONLINE 0 0 0
raidz1-17 DEGRADED 0 0 0
  replacing-0 ONLINE 0 0 13
    gptid/6438231a-d3f7-11ea-8572-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/005a3b0a-132f-11ed-a431-70106fb26160 ONLINE 0 0 0 (resilvering)
  spare-1 DEGRADED 0 0 4
    replacing-0 ONLINE 0 0 0
      gptid/646ddb41-d3f7-11ea-8572-70106fb26160 ONLINE 0 0 0 (resilvering)
      gptid/746f387f-1bc8-11ed-8526-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/4090f3c3-f9d1-11eb-90ba-70106fb26160 REMOVED 0 0 0
  replacing-2 ONLINE 0 0 0
    gptid/df420150-a74c-11ec-98a5-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/f19c53bb-13fb-11ed-a431-70106fb26160 ONLINE 0 0 0 (resilvering)
raidz1-18 DEGRADED 0 0 0
  replacing-0 ONLINE 0 0 3
    gptid/187ec90a-f51c-11ec-9f0d-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/78cac375-158c-11ed-acde-70106fb26160 ONLINE 0 0 0 (resilvering)
  spare-1 DEGRADED 0 0 1
    replacing-0 ONLINE 0 0 0
      gptid/6a9a40fb-f547-11ec-9f0d-70106fb26160 ONLINE 0 0 0 (resilvering)
      gptid/ff6d8f06-1bbc-11ed-877a-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/e4d307c0-f9d2-11eb-90ba-70106fb26160 REMOVED 0 0 0
  replacing-2 ONLINE 0 0 6
    gptid/e926643e-f51c-11ec-9f0d-70106fb26160 ONLINE 0 0 0 (resilvering)
    gptid/1bab6959-1b52-11ed-80fd-70106fb26160 ONLINE 0 0 0 (resilvering)
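For reference, this is how I counted the device states in the listing above. It is just a quick one-off sketch, not a real tool, and it uses a shortened stand-in sample rather than my actual pool; on the server the real listing would be piped in with `zpool status -v <poolname>` instead of the here-document.

```shell
# Tally device/vdev states from `zpool status`-style output.
# The here-document is a shortened sample standing in for the
# full listing above; pipe in real `zpool status -v` output instead.
summary=$(awk '
  $2 ~ /^(ONLINE|DEGRADED|UNAVAIL|REMOVED|OFFLINE|FAULTED)$/ { n[$2]++ }
  /\(resilvering\)/ { n["resilvering"]++ }
  END { for (s in n) print s, n[s] }
' <<'EOF'
raidz1-14 ONLINE 0 0 0
  gptid/d16ad6dc-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0
  gptid/d189a60f-d3f6-11ea-8572-70106fb26160 ONLINE 0 0 0 (resilvering)
  gptid/d1da5576-d3f6-11ea-8572-70106fb26160 UNAVAIL 0 0 0
EOF
)
echo "$summary"
```

On the shortened sample this reports 3 ONLINE devices, 1 UNAVAIL, and 1 resilvering.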
Because of the large number of checksum errors, I physically replaced the disks that reported errors, but the ZFS resilver never finishes and a large number of permanent data errors keep appearing. The permanent errors are mostly metadata (e.g. <0x176b>:<0xe76edff>) and snapshot data.

Please let me know the best way to recover. I would like to stop the ZFS resilver and get the pool configuration right first, but the resilvering will not stop. Do I have to export (detach) the pool once to redo the ZFS resilver?
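For context, these are the commands I have been looking at, shown as a dry run (each one only echoed) with a hypothetical pool name "tank", since I am not sure which of them is safe to run on my pool:

```shell
#!/bin/sh
# Dry run of the recovery commands under consideration; "tank" is a
# hypothetical pool name, and the leading "echo" would be removed to
# actually execute a command on the TrueNAS host.
POOL=tank

# Full status, including the <0x...> permanent-error list:
echo "zpool status -v $POOL"

# OpenZFS 2.0 (TrueNAS 12.x) can restart an in-progress resilver
# from the beginning without exporting the pool:
echo "zpool resilver $POOL"

# Exporting and re-importing (Export/Disconnect in the GUI) also
# restarts the resilver on import:
echo "zpool export $POOL"
echo "zpool import $POOL"
```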
Thanks for your support.