Can't Replace drive in pool

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
When I go to replace the drive in my pool, first off, it shows up like this in the POOL:

/dev/gptid/6f19d5d7-c114-11ed-a328-6c3be5c137b0

I see the new drive appear under "Disks". I destroyed the partition using gpart.

When I go to replace it, no drives show up under Replace. I tried taking it OFFLINE and bringing it back ONLINE, but it won't let me replace it.
 
Joined
Oct 22, 2019
Messages
3,641
I see the new drive appear under "Disks". I destroyed the partition using gpart.
Why? It's my understanding that the TrueNAS GUI automatically prepares and partitions the device before adding it into your vdev.
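For reference, a GUI-prepared pool member normally ends up with a small swap partition followed by the ZFS partition. Roughly like this on a 4 TB disk (an illustrative sketch, not your actual output; exact sizes depend on your swap setting):
Code:
# gpart show da0  (illustrative)
=>        40  7814037088  da0  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842696    2  freebsd-zfs  (3.6T)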


Can you list your zpool status, as well as your drives?
Code:
zpool status -v

Code:
geom disk list


Use preformatted text and "spoiler" tags so the output doesn't flood the post.
 

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
Because I literally always do that when I replace a drive. However, I only do it if I get an error or the drive doesn't appear. This time it's just not appearing at all, but it does show under DISKS.
 

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
Why? It's my understanding that the TrueNAS GUI automatically prepares and partitions the device before adding it into your vdev.


Can you list your zpool status, as well as your drives?
Code:
zpool status -v

Code:
geom disk list


Use preformatted text and "spoiler" tags so the output doesn't flood the post.

How do I enable SSH so I can connect through PuTTY instead of the web shell? I enabled it, and I can connect, but it's not accepting my root username and password.
 
Joined
Oct 22, 2019
Messages
3,641
I enabled it, and I can connect, but it's not accepting my root username and password.
By default, root is rejected from connecting via SSH. You have to use another user account.

(This behavior can be changed under Services -> SSH)
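(For what it's worth, that checkbox should correspond to OpenSSH's standard directive below; shown purely to illustrate what the setting does. Use the GUI to actually change it.)
Code:
# sshd_config equivalent of the "Log in as Root with Password" checkbox
PermitRootLogin yes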
 

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
Why? It's my understanding that the TrueNAS GUI automatically prepares and partitions the device before adding it into your vdev.


Can you list your zpool status, as well as your drives?
Code:
zpool status -v

Code:
geom disk list


Use preformatted text and "spoiler" tags so the output doesn't flood the post.

Here is the data you requested:

root@freenas[~]# zpool status -v
  pool: Main
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub in progress since Mon Mar 13 12:52:09 2023
        7.31T scanned at 1.53G/s, 5.30T issued at 1.11G/s, 24.9T total
        0B repaired, 21.30% done, 05:00:23 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Main                                            DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/acec2b97-f0b9-11eb-831c-6c3be5c137b0  ONLINE       0     0     0
            16088812335761967039                        OFFLINE      0     0     0  was /dev/gptid/6f19d5d7-c114-11ed-a328-6c3be5c137b0
            gptid/94c66f20-f0b9-11eb-831c-6c3be5c137b0  ONLINE       0     0     0
            gptid/a58e84aa-8e2a-11ed-a0af-6c3be5c137b0  ONLINE       0     0     0
            gptid/4bf2c15b-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
            gptid/515cbfc0-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
            gptid/56ccabce-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
            gptid/5c2ff6fb-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
            gptid/045fb6db-6cd0-11ec-b713-6c3be5c137b0  ONLINE       0     0     0
            gptid/6708a49b-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
            gptid/784b1016-ff90-11ec-92e7-6c3be5c137b0  ONLINE       0     0     0
            gptid/71f87592-04e6-11ea-b521-6c3be5c137b0  ONLINE       0     0     0
        logs
          gptid/51f8ff52-6839-11eb-992f-6c3be5c137b0    ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0

errors: No known data errors


root@freenas[~]# geom disk list
Geom name: nvd0
Providers:
1. Name: nvd0
Mediasize: 58977157120 (55G)
Sectorsize: 512
Mode: r1w1e3
descr: INTEL SSDPEK1W060GA
lunid: 5cd2e4189d280100
ident: PHBT8131003Y064Q
rotationrate: 0
fwsectors: 0
fwheads: 0

Geom name: da10
Providers:
1. Name: da10
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: SEAGATE ST4000NM0023
lunid: 5000c500627bae8b
ident: Z1Z5TXKJ
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da6
Providers:
1. Name: da6
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
descr: ATA Hitachi HUS72404
lunid: 5000cca22bdb3f14
ident: PAHXY2NT
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da11
Providers:
1. Name: da11
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: SEAGATE ST4000NM0023
lunid: 5000c500627b9957
ident: Z1Z5TXX0
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: ada0
Providers:
1. Name: ada0
Mediasize: 120034123776 (112G)
Sectorsize: 512
Mode: r1w1e2
descr: SATA SSD
ident: 19101612001851
rotationrate: 0
fwsectors: 63
fwheads: 16

Geom name: da5
Providers:
1. Name: da5
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e3
descr: ATA Hitachi HUS72404
lunid: 5000cca24cc0f7ca
ident: PCG2408B
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: ada1
Providers:
1. Name: ada1
Mediasize: 120034123776 (112G)
Sectorsize: 512
Mode: r1w1e2
descr: SATA SSD
ident: 19101612001833
rotationrate: 0
fwsectors: 63
fwheads: 16

Geom name: da0
Providers:
1. Name: da0
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c50065e162b9
ident: Z1Z31KS7
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da8
Providers:
1. Name: da8
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c50065e38d0b
ident: Z1Z32NGC
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da2
Providers:
1. Name: da2
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c50065c4669d
ident: Z1Z2YEFF
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da9
Providers:
1. Name: da9
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c50065aaaa1d
ident: Z1Z2S982
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da3
Providers:
1. Name: da3
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c500661f67ec
ident: Z1Z3B16S
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da1
Providers:
1. Name: da1
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA MB4000GCWDC
lunid: 5000c50065e4028f
ident: Z1Z32MP4
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da7
Providers:
1. Name: da7
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Mode: r1w1e3
descr: ATA HGST HUS724040AL
lunid: 5000cca24ccfbc72
ident: PN1334PCH3M45S
rotationrate: 7200
fwsectors: 63
fwheads: 255

Geom name: da4
Providers:
1. Name: da4
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
descr: ATA Hitachi HUS72404
lunid: 5000cca22bca734d
ident: PAGRZNUT
rotationrate: 7200
fwsectors: 63
fwheads: 255

**da4 is the replacement drive**
 
Joined
Oct 22, 2019
Messages
3,641
For "da4", when listed under the "Disks" page in the GUI, does it by chance (incorrectly) have it associated with a pool? (Any pool name.)

Secondly, did you ever change (one way or another) the default swap partition size for newly created vdevs?

Based on what I see, it could be one of the following culprits (the first two can be checked from the shell; see the commands after this list):
  • You changed the swap settings, in which case the GUI believes it cannot replace the old drive with the new one, since the new drive is not "large enough" to provide both a swap partition and the raw capacity for the RAIDZ2 member. (I can't say for sure what underlying logic and checks TrueNAS uses behind the scenes in these situations.)
  • TrueNAS incorrectly believes "da4" is already part of another pool. (Which would be odd, since you said you already formatted the "da4" drive.)
  • Something to do with the controller you're using (out of my wheelhouse).
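If you want to check those two from the shell, something like this should do it. (The "swapondrive" field, if I recall the name correctly, is the per-disk swap size in GiB; zdb will complain loudly if "da4" carries no ZFS labels, which is what you'd want to see.)
Code:
# check the configured per-disk swap size for new vdev members
midclt call system.advanced.config

# check whether "da4" still carries stale ZFS labels from an old pool
zdb -l /dev/da4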

If you feel comfortable, you can try to replace the disk in the vdev using zpool commands in an SSH session. (I would keep the GUI closed while you do this.) The pure command line may in fact give you the reason for the rejection, or at least a meaningful error message.

EDIT: Use the gptid identifiers when replacing a drive manually, not the assigned device names, such as "da4p1".
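Roughly, the manual procedure would look like this. (A sketch only: the partition layout mimics what the GUI created on your other members, and the rawuuid placeholder has to come from your own gpart list output.)
Code:
# partition the replacement the way the GUI does: 2 GiB swap, then ZFS
gpart create -s gpt da4
gpart add -t freebsd-swap -b 128 -s 2g da4
gpart add -t freebsd-zfs da4

# note the "rawuuid" of the new freebsd-zfs partition (da4p2)
gpart list da4

# swap in the new partition for the offlined member (GUID from zpool status)
zpool replace Main 16088812335761967039 gptid/<rawuuid-of-da4p2>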

Of course, it goes without saying: be very careful and don't rush.

I can't help but notice you're also running a scrub while your pool is in a degraded state and you're in the process of trying to resilver the RAIDZ2 vdev? :tongue:

In my opinion, I'd stop the scrub before continuing. (You can always run a scrub at a later time.)
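Stopping it is a one-liner, for reference:
Code:
zpool scrub -s Main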
 

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
For "da4", when listed under the "Disks" page in the GUI, does it by chance (incorrectly) have it associated with a pool? (Any pool name.)

Secondly, did you ever change (one way or another) the default swap partition size for newly created vdevs?

Based on what I see, it could be one of the following culprits:
  • You changed the swap settings, in which case the GUI believes it cannot replace the old drive with the new one, since the new drive is not "large enough" to provide both a swap partition and the raw capacity for the RAIDZ2 member. (I can't say for sure what underlying logic and checks TrueNAS uses behind the scenes in these situations.)
  • TrueNAS incorrectly believes "da4" is already part of another pool. (Which would be odd, since you said you already formatted the "da4" drive.)
  • Something to do with the controller you're using (out of my wheelhouse).

If you feel comfortable, you can try to replace the disk in the vdev using zpool commands in an SSH session. (I would keep the GUI closed while you do this.) The pure command line may in fact give you the reason for the rejection, or at least a meaningful error message.

EDIT: Use the gptid identifiers when replacing a drive manually, not the assigned device names, such as "da4".

Of course, it goes without saying: be very careful and don't rush.

I can't help but notice you're also running a scrub while your pool is in a degraded state and you're in the process of trying to resilver the RAIDZ2 vdev? :tongue:

In my opinion, I'd stop the scrub before continuing. (You can always run a scrub at a later time.)

It says the POOL for da4 is "N/A", so I don't think that's it.
 
Joined
Oct 22, 2019
Messages
3,641
I can't believe I didn't ask this. What version of TrueNAS Core are you on?
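(If you're not sure, this should print it from the shell:)
Code:
cat /etc/version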
 
Joined
Oct 22, 2019
Messages
3,641
There was a bug in 13.0-RELEASE, but they apparently fixed it.

So then it's either a new bug, or something to do with the controller, or something to do with a swap-partition/space discrepancy.

However, using the command line, you may be able to proceed with the replacement (or at least see a more meaningful error message).

(Use the "gptid identifiers" when replacing a drive manually; not the assigned names, such as "da4p1".)

Don't forget to stop the scrub that is in progress. Go slowly. Make sure you have your config backed up. Make sure you have a backup of your data, of course.
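For the config backup, System -> General -> Save Config in the GUI is the usual route. If you'd rather grab it from the shell, the config database should live at /data/freenas-v1.db on CORE (the destination below is just an example path):
Code:
# copy the config database somewhere safe; adjust the destination to taste
cp /data/freenas-v1.db /mnt/Main/freenas-config-backup.db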
 
Joined
Oct 22, 2019
Messages
3,641
To rule something out:
Code:
gpart list
 

nafeasonto

Dabbler
Joined
Mar 20, 2019
Messages
23
Geom name: da7
Providers:
1. Name: da7p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,a55539aa-8e2a-11ed-a0af-6c3be5c137b0,0x80,0x400000)
   rawuuid: a55539aa-8e2a-11ed-a0af-6c3be5c137b0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da7p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,a58e84aa-8e2a-11ed-a0af-6c3be5c137b0,0x400080,0x1d180be08)
   rawuuid: a58e84aa-8e2a-11ed-a0af-6c3be5c137b0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da7
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Mode: r1w1e3
 
Joined
Oct 22, 2019
Messages
3,641
Where's the rest of the output?
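In particular, the entry for "da4" is the one that matters. If the disk really has no partition table anymore, this should just complain that there's no such geom:
Code:
gpart list da4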
 