Am I reading the output wrong? To me it seems like the second drive has damaged data as well, so it is trying to get the proper data from disk one. That second disk is (almost) dead anyway, so why bother?

Just a note: resilvering "from" the first one is not the way resilvering works. ZFS walks all of its blocks and will use any valid copy of a block it is able to find. It is almost certainly copying valid data from the fail-y drive to the new drive along the way.
You have a valid point regarding "reading data from the first and second disk in order to resilver the third one", BUT what happens when it tries to read something FROM the second/faulty drive? Will it "give up" quickly, or will it spend a significant amount of time before it "gives up" because that sector is unreadable/slow-as-hell? ... That was my point...
root@truenas[~]# zpool status -v Mirror3TB
  pool: Mirror3TB
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Mar 22 09:36:16 2021
        1.16T scanned at 13.2M/s, 1.16T issued at 13.1M/s, 2.02T total
        1.16T resilvered, 57.20% done, 19:13:03 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Mirror3TB                                       ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/9419f4c7-5e82-11eb-9e20-c86000c2238d  ONLINE       0     0     0
            gptid/942e4f7e-5e82-11eb-9e20-c86000c2238d  ONLINE       3     0     4  (resilvering)
            gptid/1c5bcd47-8ae8-11eb-82b9-c86000c2238d  ONLINE       0     0     0  (resilvering)

errors: No known data errors
ZFS basically trusts the disks to do whatever is appropriate to read data. NAS/enterprise drives will only try for so long before reporting a fault to their controller, so ZFS would work as you expect and get valid data from the other drives. Consumer drives will try their utmost to provide the user with their (presumably important and non-redundant) data and can take minutes on every faulty sector to salvage what can be salvaged. All of this is perfectly reasonable under its own assumptions.

The issue lies with using consumer drives with an enterprise-minded storage system. Hopefully at this point the estimated time to complete is a worst case.

Anyway, yes, I thought there was some "line" where ZFS says "screw this, I am not reading this garbage anymore, there is no hope...". Not necessarily the TLER way, but something "smarter". Apparently there is no such thing unless the whole disk dies. So it is up to "us" to decide how long we let ZFS try hard to fix whatever is broken. Thanks for the clarification :)
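On drives that support it, that "try forever" recovery behavior can be capped with SCT Error Recovery Control (the mechanism behind TLER/ERC). A minimal sketch using smartctl; the device name is a placeholder for your actual disk, and not all consumer drives accept these settings:

```shell
# Check whether the drive supports SCT Error Recovery Control
# (/dev/ada1 is a hypothetical device name -- substitute your own)
smartctl -l scterc /dev/ada1

# If supported, limit read/write error recovery to 7.0 seconds
# (values are given in tenths of a second)
smartctl -l scterc,70,70 /dev/ada1
```

Note that this setting typically does not persist across power cycles, so it has to be reapplied at boot (e.g. via an init/post-init script).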
root@truenas[~]# zpool status -v Mirror3TB
  pool: Mirror3TB
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Mar 24 13:43:45 2021
        345G scanned at 728M/s, 60.9G issued at 129M/s, 2.02T total
        60.9G resilvered, 2.94% done, 04:26:48 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Mirror3TB                                       ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/9419f4c7-5e82-11eb-9e20-c86000c2238d  ONLINE       0     0     0
            gptid/1c5bcd47-8ae8-11eb-82b9-c86000c2238d  ONLINE       0     0     0
            gptid/91c01e19-8c9e-11eb-9d42-c86000c2238d  ONLINE       0     0     0  (resilvering)

errors: No known data errors
Once again, add that to the 3TB mirror, let it resilver. Tell ZFS to remove the final 3TB drive, and suddenly the pool will become a 6TB pool.
I think you can handle it from there.
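The attach/resilver/detach dance above can be sketched with the usual zpool commands. This is a hedged outline, not TrueNAS-specific advice (on TrueNAS you would normally do this through the GUI so the partitioning is handled for you); the gptid labels here are placeholders for your actual device names:

```shell
# Attach the new, larger disk to the existing mirror vdev; this
# triggers a resilver onto the new member.
zpool attach Mirror3TB gptid/EXISTING-MEMBER gptid/NEW-6TB-DISK

# Watch progress and wait for the resilver to finish.
zpool status Mirror3TB

# Once resilvered, detach the last remaining 3TB member.
zpool detach Mirror3TB gptid/LAST-3TB-DISK

# Let the pool grow to the new disks' capacity, either automatically...
zpool set autoexpand=on Mirror3TB
# ...or by expanding a member explicitly.
zpool online -e Mirror3TB gptid/NEW-6TB-DISK
```

The pool only expands once every member of the vdev is the larger size, which is why the final small disk has to be detached first.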