HeloJunkie
Patron
- Joined
- Oct 15, 2014
- Messages
- 300
Preface: This is the backup system listed in my signature, running TrueNAS-12.0-U3 with 5 x 9-drive RAIDZ2 vdevs. It has 140 TB usable, and about half of that is used.
I got an email saying that my pool was in a degraded state. I logged in, and something very strange appears to be going on: I have three failing drives, and all three just happen to be in the same vdev, which seems very odd to me. All three are resilvering, but one shows ONLINE, one shows DEGRADED, and one shows FAULTED.
Needless to say, those drives need to be replaced. Interestingly, these six drives are all Seagate OEM drives; all the rest of the drives in the system are WD.
So I am looking for advice on how best to handle this situation. As I stated above, this unit is a backup of another, so even if I lost everything on it I would lose no data at all, but replicating 70+ TB back from my primary server would be a pain.
Should I just wait to see if the resilver completes and then replace the drives one at a time? I am thinking all nine drives in that vdev have to go, since three of them have already failed. What is the easiest way to replace ALL the drives in a vdev? From what I have read, I am thinking it is one at a time.
Any advice would be appreciated!
Code:
  pool: vol1
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Apr 10 13:28:10 2022
        726G scanned at 556M/s, 321G issued at 246M/s, 97.2T total
        7.78G resilvered, 0.32% done, 4 days 18:34:16 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/9bed4d8b-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/9fdb0a9f-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/a0c02e7b-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/a2accfe9-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/a45f9c8d-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/a560bed0-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/a7b50f54-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/ab823287-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/adeeb776-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/b4853db5-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/b9e9de9c-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/bb972e33-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/c384679f-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/c298bc4f-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/c71378b5-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/c89c5dc2-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/cb5ae9f7-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/ce0c7f7a-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/d198996f-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/d2664bca-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/d3dc940a-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/d5468178-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/dfad66ba-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/ded262b0-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/e0cddd90-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/e7cb18ae-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/ead47bf6-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
          raidz2-3                                      ONLINE       0     0     0
            gptid/edd89401-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/efce3fb4-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/45ca72b8-81c4-11eb-9763-0007433b1890  ONLINE       0     0     0
            gptid/9dbcb705-15a4-11e7-9dc2-a0369f52eb66  ONLINE       0     0     0
            gptid/f9043142-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/faaca3e0-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/fe51d95e-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/ffdec3a8-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
            gptid/fe4c68e2-0f58-11e7-96ea-a0369f52eb66  ONLINE       0     0     0
          raidz2-4                                      DEGRADED     0     0     0
            gptid/60f3bb95-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   956  (resilvering)
            gptid/608bdff7-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/62c7936e-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/63121eac-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/638c4c58-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/64447b50-90d1-11eb-aa66-0cc47abc5340  DEGRADED   145     0   956  too many errors  (resilvering)
            gptid/63b6cfbd-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/64b62b27-90d1-11eb-aa66-0cc47abc5340  ONLINE       0     0   795
            gptid/64f75ee0-90d1-11eb-aa66-0cc47abc5340  FAULTED  1.11K     0     2  too many errors  (resilvering)
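For reference on the "one at a time" question, here is a rough sketch of how a single-drive replacement looks from the shell, using the FAULTED disk's gptid from the status output above. On TrueNAS the normal path is the GUI replace workflow (Storage > Pool Status), and the new device name `da45` below is purely hypothetical:

```shell
# Sketch only -- on TrueNAS the GUI replace workflow is usually preferred.
# Take the faulted disk offline (gptid taken from the zpool status above):
zpool offline vol1 gptid/64f75ee0-90d1-11eb-aa66-0cc47abc5340

# Physically swap the disk, then replace it with the new device.
# "da45" is a hypothetical device name -- substitute the actual new disk:
zpool replace vol1 gptid/64f75ee0-90d1-11eb-aa66-0cc47abc5340 da45

# Watch the resilver and only move to the next disk after it completes:
zpool status vol1
```

Repeating this per disk would eventually swap out the whole vdev; on a RAIDZ2 you would not want more than two disks out of the vdev at any one time.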