Degraded Pool after replacing disks

DomCon

Cadet
Joined
Feb 2, 2023
Messages
7
Hi Everyone,

Custom Hardware build of TrueNAS
TrueNAS-12.0-U3
24 GB of RAM
2x Intel(R) Xeon(R) CPU E5603 @ 1.60GHz
36x Disks
34x disks are members of a pool called "DATA"

RESILVER Status below:

Status: FINISHED
Errors: 0
Date: 2023-01-16 19:29:11

The pool is in a degraded state.
2x disks were replaced but the pool still shows Degraded.
2x spares show as Unavailable; I believe they are in use.

The GPTIDs of the offline disks match the GPTIDs assigned to the spare disks.
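If it helps, this is roughly how a spare in use appears in `zpool status` (the gptids and vdev layout below are illustrative placeholders, not my actual output):

```
  pool: DATA
 state: DEGRADED
config:

        NAME                      STATE     READ WRITE CKSUM
        DATA                      DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            spare-0               DEGRADED     0     0     0
              gptid/<old-disk>    OFFLINE      0     0     0
              gptid/<spare-disk>  ONLINE       0     0     0
            ...
        spares
          gptid/<spare-disk>      INUSE     currently in use
```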

What are the steps required to get the Pool back to a healthy state (see pics)?
Degraded Disks3.jpg

Degraded Disks.jpg
Degraded Disks2.jpg


Please let me know what other information is required.

Thanks in advance,

DC
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Oh my, is that a 36-disk-wide vdev? You're going to have more problems than just disk replacement with that arrangement.

A full list of your hardware is required before any meaningful help can be provided.
 

DomCon

Cadet
Joined
Feb 2, 2023
Messages
7
Hi Jailer,

Thanks for your reply, see specs below, let me know if any further details are required:

TrueNAS-12.0-U3
Supermicro X8DT6-A-IS018
24 GB total: 12x Samsung 2 GB PC3L-10600R DDR3-1333 registered ECC 1Rx8 CL9 240-pin 1.35 V low-voltage modules
2x Intel(R) Xeon(R) CPU E5603 @ 1.60GHz
36x 3TB ATA Hitachi HUA72303
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Wow, how did you guys come to the decision to make such a wide vdev? It's not really a recommended practice to make such an extremely wide vdev in ZFS.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Whattteva said:
Wow, how did you guys come to the decision to make such a wide vdev? It's not really a recommended practice to make such an extremely wide vdev in ZFS.
Not to mention 24 GB of RAM with a 108 TB pool. @DomCon, you need to do some research and seriously rethink your approach to your server setup; it is destined to fail.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Jailer said:
Not to mention 24 GB of RAM with a 108 TB pool. @DomCon, you need to do some research and seriously rethink your approach to your server setup; it is destined to fail.
Ah yes... I totally missed that one. It's really interesting that someone could spare enough money for a 36-drive array and dual Xeons but only enough money for 24 GB of RAM. My lowly 4x 6TB array has 32 GB of RAM lol.
 

DomCon

Cadet
Joined
Feb 2, 2023
Messages
7
Hi guys, I was just trying to make the most of some free hardware I was able to obtain.

I will look into bumping up the RAM. Any suggestions on the process to correct my issue with the replaced drives and the spares?

Thanks,

Dom
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
These are super old Xeons, so outside of drives the setup is probably very cheap.
I think all is fine for now; all that's needed is to finish the replacement by returning the two drives to spare status. TrueNAS requires the administrator to do this, and to acknowledge that the issue is solved.

For the long term, what's needed is a way to recreate the pool with a better geometry. It is advised to keep raidz# vdevs no wider than 10-12 drives, or to go for dRAID.
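From the shell, and assuming the replacement disks have finished resilvering, releasing the spares should come down to a `zpool detach` of one side of each spare-N group. The gptids below are placeholders; verify them against `zpool status` first:

```
# Identify the two members of each spare-N group
zpool status DATA

# Option A: keep the replacement disk in the vdev and return the spare
# to the spares list
zpool detach DATA gptid/<spare-disk>

# Option B: promote the spare into the vdev permanently and drop the
# old/offline disk
zpool detach DATA gptid/<offline-disk>
```

On TrueNAS the GUI equivalent is generally preferred, so the middleware stays in sync with the pool state.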
 

DomCon

Cadet
Joined
Feb 2, 2023
Messages
7
@Etorix what is the process to finish the replacement?

Thanks in advance,

Dom
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I've never done it myself because I don't use spares, but it should be as simple as clicking the 3-dot menu for the /dev/gptid and picking the appropriate option.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
See pages 52-54 in the attached ZFS Administration Guide.
 

Attachments

  • ZFS_Administration_Guide.pdf
    651.8 KB