SOLVED Is my mirrored boot-pool correctly set up?

tomk3003

Cadet
Joined
Aug 5, 2023
Messages
7
After a failure I had to replace one of the SSDs in my boot-pool.
Since the replacement via the GUI did not work, I had to do it via the command line with
Code:
zpool replace -o ashift=12 boot-pool <failed_id> /dev/sdf

I know the faulted disk had two partitions, while the new one does not seem to be partitioned at all.
Here is the parted output:
Code:
(parted) select /dev/sde                                               
Using /dev/sde
(parted) print                                                         
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sde: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name  Flags
 1      20.5kB  545kB  524kB                     bios_grub
 2      545kB   128GB  128GB  zfs

(parted) select /dev/sdf
Using /dev/sdf
(parted) print                                                         
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sdf: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  128GB  128GB  zfs


The boot pool and TrueNAS seem to be fine, but I am worried that I did something wrong.
I had SSD failures before but always replaced them via the GUI.

(TrueNAS-SCALE-22.12.3.3)
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
AFAIK you can't touch the boot pool from the GUI; the correct procedure is to save your config, perform a reinstall on the healthy drives, and then import the config.
 
Joined
Oct 22, 2019
Messages
3,641
Since the replacement via GUI did not work
Can you elaborate?


Afaik you can't touch the boot pool from the GUI,
You can check, manage, expand, attach, and replace drives in the boot-pool via System Settings -> Boot


The boot pool and TrueNAS seem to be fine, but I am worried that I did something wrong.
Using the GUI does more than just attach/replace a drive to your boot-pool's mirror vdev. It also partitions the new drive with an ESP (duplicated) + swap* + OS (ZFS mirror).

* A swap partition only exists if you let the installer create one during the installation process.
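
In other words, the manual equivalent of the GUI replacement is roughly: copy the partition layout from the healthy boot device onto the new one, attach only the ZFS (OS) partition to the mirror, and take care of the bootloader partition as well. Something along these lines (just a sketch with placeholder device names, assuming sgdisk is available; the exact layout depends on whether your install is BIOS or EFI and whether a swap partition was created):
Code:
# Sketch only - device names are placeholders, adjust for your system
sgdisk --replicate=/dev/sdNEW /dev/sdHEALTHY   # copy the partition table from the healthy member
sgdisk --randomize-guids /dev/sdNEW            # give the new disk its own partition GUIDs
zpool attach boot-pool sdHEALTHY2 /dev/sdNEW2  # mirror only the ZFS (OS) partition, not the whole disk
# ...plus duplicating the ESP or bios_grub partition so the new disk remains bootable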
 

tomk3003

Cadet
Joined
Aug 5, 2023
Messages
7
Can you elaborate?
Sorry, I did not write it down. I got the same error on the command line for a plain zpool replace.
Googling around, I found a tip about using the ashift parameter, and that did the trick.

You can check, manage, expand, attach, and replace drives in the boot-pool via System Settings -> Boot
Yes, I have done this in the past, but this time the replace via the GUI threw an error, and I thought doing it on the command line would be equivalent.

Using the GUI does more than just attach/replace a drive to your boot-pool's mirror vdev. It partitions it with ESP (duplicated) + swap + OS (ZFS mirror).
Ah, this explains a lot.
In this case, I could detach the new disk, clear it, try again via the GUI, and report the error if it pops up again.
 

tomk3003

Cadet
Joined
Aug 5, 2023
Messages
7
Here is the result so far:

detach/clear:
Code:
root@truenas[~]# zpool detach boot-pool /dev/sdf
root@truenas[~]# parted
(parted) select /dev/sdf                                                 
Using /dev/sdf
(parted) print                                                           
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sdf: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  128GB  128GB  zfs

(parted) rm 1                                                             
(parted) print                                                           
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sdf: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End  Size  File system  Flags

(parted) mktable gpt                                                     
Warning: The existing disk label on /dev/sdf will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes                                                               
(parted) print                                                           
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sdf: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

(parted) quit
Information: You may need to update /etc/fstab.


After that I tried to attach the disk to the boot-pool and got:
Code:
 FAILED
[EZFS_BADTARGET] cannot attach /dev/sdf2 to /dev/sde2: can only attach to mirrors and top-level disks
More info...
 Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 261, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 261, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 2208, in libzfs.ZFSVdev.attach
libzfs.ZFSException: cannot attach /dev/sdf2 to /dev/sde2: can only attach to mirrors and top-level disks

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1382, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 264, in extend
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADTARGET] cannot attach /dev/sdf2 to /dev/sde2: can only attach to mirrors and top-level disks
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 428, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 463, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1378, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1246, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/boot.py", line 117, in attach
    await job.wrap(extend_pool_job)
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 595, in wrap
    return await subjob.wait(raise_error=True)
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 376, in wait
    raise self.exc_info[1]
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 428, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 458, in __run_body
    rv = await self.middleware._call_worker(self.method_name, *self.args, job={'id': self.id})
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1358, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1273, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1258, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADTARGET] cannot attach /dev/sdf2 to /dev/sde2: can only attach to mirrors and top-level disks


This seems very strange, but it is somewhat consistent: I only get the ... menu with the replace/attach actions on the existing disk, not on the pool itself, so I used it from there. I would have expected the attach action on the pool and the replace action on the disk.
Screenshot from 2023-08-06 13-04-09.png

Rebooting does not change anything.
The partition table looks ok now though:
Code:
root@truenas[~]# parted
GNU Parted 3.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdf                                                 
Using /dev/sdf
(parted) print                                                           
Model: ATA Patriot P220 128 (scsi)
Disk /dev/sdf: 128GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name  Flags
 1      20.5kB  545kB  524kB                     bios_grub
 2      545kB   128GB  128GB  zfs


Not sure what to do now. The GUI looks strange to me.
 

tomk3003

Cadet
Joined
Aug 5, 2023
Messages
7
After cloning the partition table and the boot partition, I was able to attach the second disk, but only by using:
Code:
zpool attach -f -o ashift=9 boot-pool sde2 sdf2

Resilvering finished and everything looks as it should in the GUI.
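
In case it helps anyone else: the cloning I mentioned can be done along these lines (a sketch, not my exact command history; sgdisk and dd assumed to be available, device names as in this thread):
Code:
# Copy the partition layout from the healthy member (sde) to the new disk (sdf)
sgdisk --replicate=/dev/sdf /dev/sde
sgdisk --randomize-guids /dev/sdf
# Copy the small bios_grub partition so the new disk can boot on a BIOS system
dd if=/dev/sde1 of=/dev/sdf1 bs=512
# (running grub-install /dev/sdf instead may be the more robust way to reinstall the bootloader)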

I don't understand why the ashift parameter was necessary.
I never played around with this before and have done everything disk-related via the GUI.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Morning @tomk3003

There's presently an upstream bug in ZFS that causes the zpool attach command not to ignore the pool-level ashift (as it should) during attach, which is why it needed to be specified manually. A fix should already be in for this; I can't find the GitHub pull requests at the moment, though.
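
If you want to double-check what each side of the mirror ended up with, the ashift recorded in the vdev labels can be read straight from the member partitions, e.g. (using the device names from this thread):
Code:
zdb -l /dev/sde2 | grep ashift   # ashift recorded in the label of the existing member
zdb -l /dev/sdf2 | grep ashift   # should report the same value after the attach
zpool get ashift boot-pool       # pool-level default (0 means auto-detect)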

Glad to hear things are working for you now.
 

tomk3003

Cadet
Joined
Aug 5, 2023
Messages
7
Thanks for the info. As I am not allowed to mark the thread as solved, could you please do that?
 