No Member Disk when trying to replace a failed drive

lockdown2222 (Cadet · Joined Dec 13, 2023 · Messages: 2)
Hey everyone, I'm a noob to FreeNAS/TrueNAS but have been running it for years. I've run into a couple of drive failures: I offlined the failed disks, swapped in new drives, and tried to replace them via "Member disk" in the pool utility, but there were no disks to choose from. Any ideas? What other info can I provide? FreeNAS-11.3-U4.1. I also ran camcontrol devlist and it does in fact see the new drives.
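For reference, this is roughly what I ran to check that the system actually sees the new drives (full pool output pasted below; device names are whatever camcontrol reports on my box):

  # List the disks the controller/HBA sees - the new drives do show up here
  camcontrol devlist

  # Pool status (full output below)
  zpool status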

root@freenas[~]# zpool status
  pool: Media_Storage
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Mar 20 00:00:11 2022
        53.5T scanned at 403M/s, 49.9T issued at 375M/s, 64.9T total
        0 repaired, 76.85% done, 0 days 11:40:09 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        Media_Storage                                     DEGRADED     0     0    22
          raidz3-0                                        DEGRADED     0     0    44
            gptid/0bc526bf-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0c063f0a-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0c6414fe-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0cf14688-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            replacing-4                                   DEGRADED     0     0     0
              13006508720922706027                        OFFLINE      0     0     0  was /dev/gptid/0d159e6a-d74f-11ea-b2c5-0cc47a1633bc
              gptid/2e5162ac-df57-11eb-8cad-0cc47a1633bc  ONLINE       0     0     0
            gptid/0ccbe454-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0d7f6273-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0dda6a54-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0e269409-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0e47934d-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0e6a729b-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/0eeef249-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1020ded0-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1061ddba-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/10f7ddbd-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/117b511e-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/118f7bf3-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/11e4393d-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/11f4584a-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/12c3ecb4-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/13147353-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/130ea98f-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1385749d-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1425f0b3-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/14cc5494-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/14ebade8-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/14e491bd-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/15544609-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/162cfcca-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/166b758c-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1684b350-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            replacing-31                                  DEGRADED     0     0     0
              16792829716586351974                        OFFLINE      0     0     0  was /dev/gptid/16d7846d-d74f-11ea-b2c5-0cc47a1633bc
              gptid/2f8b08c8-7a17-11ec-8479-0cc47a1633bc  ONLINE       0     0     0
            gptid/16ea815e-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/1777b72b-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/17844169-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0
            gptid/17d05305-d74f-11ea-b2c5-0cc47a1633bc    ONLINE       0     0     0

errors: 6 data errors, use '-v' for a list

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:52 with 0 errors on Sun Mar 20 03:46:52 2022
config:

        NAME

lockdown2222 (Cadet)
Adding a screenshot as well.
 

Attachment: Screenshot 2023-12-13 142902.png (78.1 KB)

Arwen (MVP · Joined May 17, 2014 · Messages: 3,611)
I can't answer your question yet; perhaps after I think about it, or maybe someone else will have an answer.

This is something I don't understand:
scan: scrub in progress since Sun Mar 20 00:00:11 2022
53.5T scanned at 403M/s, 49.9T issued at 375M/s, 64.9T total
0 repaired, 76.85% done, 0 days 11:40:09 to go
Is your date off on that server?

You generally don't want a scrub running when you are trying to replace a disk.
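If that scrub really is still running, you can stop it, do the replacement, and scrub again once the resilver is done; roughly something like this, run as root with your pool name (a sketch, not a required procedure):

  # Stop the in-progress scrub; it can be started again later
  # (newer ZFS versions can also pause with 'zpool scrub -p', if available)
  zpool scrub -s Media_Storage

  # ...perform the disk replacement and let the resilver finish...

  # Kick off a fresh scrub afterwards
  zpool scrub Media_Storage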


But if I am reading your zpool status output correctly, you have a less-than-ideal pool design. In general, RAID-Zx widths should max out at about 10 to 12 disks, depending on the enclosure and other factors, yet your RAID-Z3 vdev appears to be 36 disks wide. That can cause slowdowns, even in disk replacements.

There is no "cure" / "fix" other than backing up your data and re-creating the pool with a different layout, like 3 vdevs of 12-disk RAID-Z2. Yes, that is 3 more disks "lost" to parity than you have now, but it makes for a much easier pool to manage; an example of that layout is sketched below.
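Just to illustrate the kind of layout I mean, purely as an example (the pool name and disk names are placeholders, and on FreeNAS you would normally build this through the GUI pool wizard rather than at the command line):

  # Example only: 36 disks arranged as 3 vdevs of 12-disk RAID-Z2
  zpool create tank \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
      raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
      raidz2 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35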

That said, your disk replacements may not be impacted by a 36-disk-wide RAID-Z3.
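And if the GUI keeps offering no member disks, the replacement can also be done from the shell. Very roughly, and treating the device name, swap size, and partition layout below as placeholders rather than exactly what your install uses:

  # daX = the new, empty replacement disk - double-check the device before touching it!
  # Wipe any old partition table that could be hiding it from the middleware
  # (skip this if the disk is completely blank)
  gpart destroy -F daX

  # Partition it the way FreeNAS normally does: a small swap slice plus one ZFS partition
  gpart create -s gpt daX
  gpart add -a 4k -t freebsd-swap -s 2g daX
  gpart add -a 4k -t freebsd-zfs daX

  # Note the rawuuid of the new freebsd-zfs partition
  gpart list daX | grep rawuuid

  # Replace the offlined member; its GUID is the long number next to OFFLINE in zpool status
  zpool replace Media_Storage <old-member-guid> gptid/<rawuuid-of-new-partition>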
 