Patrick_3000
Contributor
- Joined
- Apr 28, 2021
- Messages
- 167
I have an HDD pool consisting of a three-way mirror: two 10 TB Seagate Ironwolf Pro drives that I bought new approximately 1.5 years ago, and a 12 TB NAS drive that I bought used (refurbished) and rebranded around the same time.
Yesterday, during a weekly scrub, SCALE found a one-bit checksum error on one of the drives and now says that the pool is unhealthy due to an unrecoverable error. Unfortunately, I am unable to determine which drive had the checksum error, although interestingly, I strongly suspect that it was one of the Ironwolf Pro drives. (Incidentally, I ran another scrub today, and all three drives showed zero checksum errors.)
The problem is that "zpool status" identifies drives only by what looks to be a UUID, and the UUID shown for the drive that had the checksum error does not correspond to any UUID revealed by the "fdisk -l" command, although it is extremely close (but not identical) to the UUIDs for both Ironwolf Pro drives.
So, does anyone know how to determine which drive had a checksum error from the output of "zpool status"?
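In case it helps anyone else digging into this: on SCALE, the identifier in "zpool status" is typically the GPT partition UUID (PARTUUID) of the ZFS data partition, not the disk UUID that "fdisk -l" reports, which would explain the close-but-not-identical values. A rough sketch of how one might trace it back to a physical drive (pool name and any example values here are hypothetical):

```shell
# Show pool members; SCALE lists vdevs by the PARTUUID of each
# drive's ZFS data partition ("tank" is a placeholder pool name).
zpool status -v tank

# Each symlink here is named after a PARTUUID and points at the
# partition (e.g. sda2) that carries it.
ls -l /dev/disk/by-partuuid/

# Cross-reference partitions with drive model and serial number so
# the flagged vdev can be matched to a physical disk.
lsblk -o NAME,PARTUUID,MODEL,SERIAL
```

Alternatively, "zpool status -L" resolves the symlinks and prints plain /dev/sdX device names directly, which can then be matched to a serial with lsblk or smartctl.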