Unable to re-add a removed drive from mirrored pool

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
So I was replacing 1 of my 2 mirrored drives in a pool. Through the GUI I set the drive to offline then physically replaced the drive with another of the same size. However, back at the GUI, instead of selecting "Replace" on the drive's drop-down menu, I selected "Online." After doing that, instead of seeing "da0p2" for the name, it showed the GUID. I was unable to use the "offline" or "replace" options after that. This may not have been the best idea, but I ended up removing the drive from the pool. Now all I've got is the one drive.

I tried following some guides to re-add the drive to the pool.
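They generally boil down to a zpool attach of this form, where the first device is the existing pool member and the second is the new disk (the names in angle brackets are placeholders, not my actual devices):
Code:
# General form: attach <new-device> as a mirror of <existing-member>
zpool attach <pool> <existing-member> <new-device>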

However, I get this error:
Code:
# zpool attach storage /dev/gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc /dev/gptid/b16e8d1d-1389-11eb-b748-6805ca4284fc
cannot attach /dev/gptid/b16e8d1d-1389-11eb-b748-6805ca4284fc to /dev/gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc: can only attach to mirrors and top-level disks


What am I doing wrong?
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
Just wanted to show the current output of a few commands:

Code:
# zpool status storage
  pool: storage
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 03:02:13 with 0 errors on Thu Oct 22 16:36:35 2020
config:

        NAME                                          STATE     READ WRITE CKSUM
        storage                                       ONLINE       0     0     0
          gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc  ONLINE       0     0     0

errors: No known data errors


Code:
# glabel status
                                      Name  Status  Components
gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc     N/A  da1p2
gptid/25e54ffc-2fef-11e9-9b9f-001a4b44b638     N/A  da2p2
gptid/b22f3939-39f4-11ea-96c3-6805ca4284fc     N/A  da3p1
gptid/b31d6d61-39f4-11ea-96c3-6805ca4284fc     N/A  da4p1
gptid/35f9a725-14d1-11eb-b748-6805ca4284fc     N/A  da0p1
gptid/39a8116a-14d1-11eb-b748-6805ca4284fc     N/A  da0p2


Code:
# gpart list
Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 3907029127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,dcc67762-e994-11ea-92bd-6805ca4284fc,0x80,0x400000)
   rawuuid: dcc67762-e994-11ea-92bd-6805ca4284fc
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,dcdb9427-e994-11ea-92bd-6805ca4284fc,0x400080,0xe8a08808)
   rawuuid: dcdb9427-e994-11ea-92bd-6805ca4284fc
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5

...

Geom name: da0
modified: false
state: CORRUPT
fwheads: 255
fwsectors: 63
last: 3907029127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,35f9a725-14d1-11eb-b748-6805ca4284fc,0x80,0x400000)
   rawuuid: 35f9a725-14d1-11eb-b748-6805ca4284fc
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da0p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,39a8116a-14d1-11eb-b748-6805ca4284fc,0x400080,0xe8a08808)
   rawuuid: 39a8116a-14d1-11eb-b748-6805ca4284fc
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432
Consumers:
1. Name: da0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
I've seen other forum posts and blog articles indicating that I need to use the device's name. But no matter what name I try, I get the message "no such device in pool".

For instance:
Code:
# zpool attach storage /dev/da1p2 gpt/da0_part2
cannot attach gpt/da0_part2 to /dev/da1p2: no such device in pool


I've even tried adding a label to da1p2 and using that. No luck.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
zpool attach storage /dev/da1p2 gpt/da0_part2
For a start, there's no such partition as gpt/da0_part2

If you were going to add it that way, it would be /dev/da0p2 (but don't do that).

What you want is to add it with gptid/39a8116a-14d1-11eb-b748-6805ca4284fc, adding it to gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc

Don't put /dev in front of gptids.

So finally like this:

zpool attach storage gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc gptid/39a8116a-14d1-11eb-b748-6805ca4284fc
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
Thank you for the help.

Sorry, forgot to mention that the last time I tried this, I created the partition with a label:
Code:
root@freenas ~ # gpart destroy -F /dev/da0
da0 destroyed
root@freenas ~ # gpart create -s gpt /dev/da0
da0 created
root@freenas ~ # gpart add -i 1 -b 128 -t freebsd-swap -s 2g /dev/da0
da0p1 added
root@freenas ~ # gpart add -i 2 -t freebsd-zfs -l da0_part2 /dev/da0
da0p2 added
root@freenas ~ # glabel status
                                      Name  Status  Components
gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc     N/A  da1p2
gptid/25e54ffc-2fef-11e9-9b9f-001a4b44b638     N/A  da2p2
gptid/b22f3939-39f4-11ea-96c3-6805ca4284fc     N/A  da3p1
gptid/b31d6d61-39f4-11ea-96c3-6805ca4284fc     N/A  da4p1
gptid/5d8783e8-17b9-11eb-b5fb-6805ca4284fc     N/A  da0p1
                             gpt/da0_part2     N/A  da0p2
gptid/6ef7c5a2-17b9-11eb-b5fb-6805ca4284fc     N/A  da0p2
root@freenas ~ # zpool attach storage gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc gptid/6ef7c5a2-17b9-11eb-b5fb-6805ca4284fc
cannot attach gptid/6ef7c5a2-17b9-11eb-b5fb-6805ca4284fc to gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc: can only attach to mirrors and top-level disks


What's weird is that last time, the glabel status command didn't show the gptid of that partition. Even stranger: this time, when I run it a second time, it no longer shows the label:
Code:
root@freenas ~ # glabel status
                                      Name  Status  Components
gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc     N/A  da1p2
gptid/25e54ffc-2fef-11e9-9b9f-001a4b44b638     N/A  da2p2
gptid/b22f3939-39f4-11ea-96c3-6805ca4284fc     N/A  da3p1
gptid/b31d6d61-39f4-11ea-96c3-6805ca4284fc     N/A  da4p1
gptid/5d8783e8-17b9-11eb-b5fb-6805ca4284fc     N/A  da0p1
gptid/6ef7c5a2-17b9-11eb-b5fb-6805ca4284fc     N/A  da0p2


Regardless, it doesn't seem to think that da1p2 is a top-level disk.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Don't use device names like da1p2 for your zpool commands unless it is the boot-pool you are manipulating. Always refer to disks as, e.g., gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc. Don't use manual GPT labels, either.

TrueNAS uses only the gptid/UUID scheme internally, so please stick to it.
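For example, you can map a partition to its gptid and then use that on both sides of the attach (the grep target and gptids below are placeholders):
Code:
# Find the gptid label that corresponds to a partition, e.g. da0p2
glabel status | grep da0p2
# Attach using gptid names on both sides, without a /dev prefix
zpool attach storage gptid/<existing-member-gptid> gptid/<new-partition-gptid>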
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
Just for kicks, I tried exporting then re-importing the pool and went through the whole process again. Same results.
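From the command line, that export/import is roughly the following (-d tells zpool import where to look for the gptid device nodes):
Code:
zpool export storage
zpool import -d /dev/gptid storage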
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
Just updated to TrueNAS 12.0-U1 and tried this again. Same results.

Code:
# zpool attach storage gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc gptid/14c9214b-42ce-11eb-8518-6805ca4284fc
cannot attach gptid/14c9214b-42ce-11eb-8518-6805ca4284fc to gptid/dcdb9427-e994-11ea-92bd-6805ca4284fc: can only attach to mirrors and top-level disks


I guess it needs to see that pool as a "mirrored" pool before it'll attach. But how do I do that if there's only 1 drive?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
If you want to replace a drive in a mirror:
Offline the old one (or just pull it if you are lazy) and add the new drive to the mirror.
If anything goes wrong with the new drive during the initial rebuild, you can put the old one back with a pretty short rebuild stage by switching the drives again and hitting online.

Simply put:
You create a triple mirror in which the old drive is offlined, and remove the old drive once the new drive is completely rebuilt.
Preferably, you could even skip removing the old drive first: just add the new drive and make it a triple mirror (using a spare SATA or even USB slot), then remove the old drive after the new drive has all the content (because that has zero chance of failure).
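Roughly, on the command line that workflow would look something like this (the pool name and gptids are placeholders):
Code:
# Attach the new drive so the vdev temporarily becomes a 3-way mirror
zpool attach tank gptid/<existing-member> gptid/<new-drive>
# Watch the resilver and wait for it to finish before touching the old drive
zpool status tank
# Once the new drive is fully resilvered, drop the old one from the mirror
zpool detach tank gptid/<old-drive>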
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
If you want to replace a drive in a mirror:
Offline the old one (or just pull it if you are lazy) and add the new drive to the mirror.
If anything goes wrong with the new drive during the initial rebuild, you can put the old one back with a pretty short rebuild stage by switching the drives again and hitting online.

That doesn't really apply here anymore. From my original post:
So I was replacing 1 of my 2 mirrored drives in a pool. Through the GUI I set the drive to offline then physically replaced the drive with another of the same size. However, back at the GUI, instead of selecting "Replace" on the drive's drop-down menu, I selected "Online." After doing that, instead of seeing "da0p2" for the name, it showed the GUID. I was unable to use the "offline" or "replace" options after that. This may not have been the best idea, but I ended up removing the drive from the pool. Now all I've got is the one drive.

So now I've got a pool with a single drive that's not mirrored at all.

I'm now considering getting a third drive and having a striped pool.
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
I tried to just replace the existing drive (da1) with the new one (da0) to see if maybe there's an issue with the existing drive. Using the GUI, I went to the pool status, clicked the gear icon for "da1p2", selected "Replace", and chose da0 as the member disk. As soon as it finished wiping, it gave an error:

Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 2060, in libzfs.ZFSVdev.replace
libzfs.ZFSException: already in replacing/spare config; wait for completion or use 'zpool detach'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 91, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 279, in replace
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADTARGET] already in replacing/spare config; wait for completion or use 'zpool detach'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
    await self.future
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 397, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/replace_disk.py", line 122, in replace
    raise e
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/replace_disk.py", line 102, in replace
    await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1203, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1209, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1136, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADTARGET] already in replacing/spare config; wait for completion or use 'zpool detach'


Same results when using the "Force" checkbox.
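Since the error mentions 'zpool detach', I'm guessing the way to check for a half-finished replace would be something like this (the member name here is a guess):
Code:
# A stuck replace shows up as a "replacing-N" vdev in the full status output
zpool status -v storage
# If one is listed, detaching the half-added member should clear it
zpool detach storage gptid/<stuck-member-gptid>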
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
I think I'm going to just add a third hard drive & make it a striped pool. I didn't do that before because this desktop only has 2 hard drive bays. I'll need to get a mounting bracket to put the third drive in the 5.25" optical drive bay AND a SATA power splitter cable.
 

amlamarra

Explorer
Joined
Feb 24, 2017
Messages
51
This is now resolved. What I ended up doing was adding another drive to the system, making it into a separate pool, replicating everything over, destroying the old pool, recreating it with the same name (as a mirrored pool), and replicating everything back to it.
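The replication itself can be done with zfs send/receive; a rough sketch of that approach (the temporary pool and snapshot names here are made up):
Code:
# Copy everything from the old pool to the temporary pool
zfs snapshot -r storage@migrate
zfs send -R storage@migrate | zfs recv -F temppool/storage
# After destroying and recreating "storage" as a mirror, copy it back
zfs snapshot -r temppool/storage@migrateback
zfs send -R temppool/storage@migrateback | zfs recv -F storage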
 