jkingaround
Hi all, looking for some help. I recently started getting unreadable sectors on one of my 4TB disks, so I ordered a replacement. Since I no longer have any spare bays (all 24 bays in my chassis are in use), when the replacement arrived I offlined the old disk, shut the server down, put the new drive in, and hit Replace, without the usual burn-in I did back when I had extra bays. However, something went wrong while resilvering and the replacement never completed properly.

From replies on a Reddit thread I was told it is likely because the drive I bought is the SMR WD Red (WD40EFAX) rather than the older CMR version (WD40EFRX), and the SMR models have known issues with ZFS/FreeNAS. I have set up a refund and plan to buy the CMR disk as soon as the return is dealt with through Amazon.

In the meantime my pool is degraded, so I would like to put the old, pre-failing drive back in (it was throwing unreadable sectors, but the pool wasn't degraded yet) until I can get the new disk popped in there. I'm not sure of the best way to go about this, though: I cannot get the replacement drive to offline, and I don't want to risk anything.
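For clarity, I did everything through the web UI, but I believe it amounts to roughly the following CLI sequence. Treat the exact commands as my best reconstruction rather than what was literally run; the gptids are the old and new devices from the status output further down:

Code:
# offline the failing 4TB drive
zpool offline Chico gptid/dbd87bda-d718-11e9-b598-000c2920d667
# ...shut down, physically swap the drives, boot back up...
# replace the old device with the new drive the GUI partitioned
zpool replace Chico gptid/dbd87bda-d718-11e9-b598-000c2920d667 gptid/e981831a-aed7-11ea-aa5d-000c2920d667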
What is the best course of action to swap back to the old, pre-failing disk? And how do I "properly" burn in the replacement without being able to have both disks plugged in at the same time?
Help please.
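For reference, this is roughly the burn-in routine I used to run when I still had a spare bay (the /dev/da0 device name is just a placeholder, and badblocks -w is destructive, so it only ever went against a blank disk):

Code:
# SMART self-tests first (device name is a placeholder)
smartctl -t short /dev/da0
smartctl -t conveyance /dev/da0
smartctl -t long /dev/da0
# destructive full-surface write/read test - wipes the disk
badblocks -b 4096 -ws /dev/da0
# then check that no reallocated or pending sectors showed up
smartctl -A /dev/da0

The problem is that with every bay full I have nowhere to run this against the replacement before it goes live in the pool.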
Server Specs:
CHASSIS: SUPERMICRO 4U 846E16-R1200B
MOBO: X8DTE-F
RAM: 128GB ECC
CPU: Dual Intel XEON L5520
DRIVES: 16 x 4TB WD Red RAID Z2 | 8 x 8TB WD Red RAID Z2
Note: the "scrub canceled" in the output below is actually me stopping the resilver that tried to start AGAIN after I rebooted to see if that would help anything.
Output of zpool status -v Chico:
Code:
  pool: Chico
 state: DEGRADED
  scan: scrub canceled on Thu Jun 18 21:20:27 2020
config:

        NAME                                              STATE     READ WRITE CKSUM
        Chico                                             DEGRADED      0     0     0
          raidz2-0                                        DEGRADED      0     0     0
            replacing-0                                   UNAVAIL       0     0     5
              7441467098546415565                         UNAVAIL       0     0     0  was /dev/gptid/dbd87bda-d718-11e9-b598-000c2920d667
              gptid/e981831a-aed7-11ea-aa5d-000c2920d667  FAULTED       0   119     0  too many errors
            gptid/de312a5c-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/e06a57a6-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/e2b9b37c-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/e4f9aa44-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/e71ecdd5-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/e974b0cb-d718-11e9-b598-000c2920d667    ONLINE        0     0     0
            gptid/ebc31519-d718-11e9-b598-000c2920d667    ONLINE        0     0     0

errors: No known data errors
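From what I've read, backing out of the stuck replacement would mean detaching the faulted SMR drive from the replacing-0 vdev, something like the command below, but I haven't run it because I'm not sure it's safe in this state, so please correct me if that's the wrong move:

Code:
# my guess at backing out of the stuck replace - not run yet
zpool detach Chico gptid/e981831a-aed7-11ea-aa5d-000c2920d667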