Correct method for converting disks to mirrors in TrueNAS SCALE

Spoon

Dabbler
Joined
May 24, 2015
Messages
25
I’ve changed the pool setup on my TrueNAS SCALE box from spinning disks to flash. The main pool will consist of 2x Optane 900p 280GB mirrored for metadata and 6x Samsung QVO 8TB configured as 3 mirrors. Given the cost of the drives, I’ve started out with a single disk in each of the vdevs, with the intention of converting them to full mirrors as the system matures.
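
For reference, the target end state is equivalent to a pool laid out like this (a rough sketch only; the device names are placeholders, and on TrueNAS the pool itself is created through the GUI rather than with zpool create):

Code:
zpool create DATA \
    mirror sda sdb \
    mirror sdc sdd \
    mirror sde sdf \
    special mirror nvme0n1 nvme0n2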

In preparation for this I wanted to make sure that I have the correct method for attaching drives in SCALE and converting them to mirrors. I’ve created a test setup in VMware Workstation to test the process out and have the following method. It appears to work, however I get a funny result at the end which I’m unsure about.

Method:

First I establish the disk names in the pool using zpool status and get the following:

Code:
truenas# zpool status                                                 
  pool: DATA
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        DATA                                    ONLINE       0     0     0
          f158728d-26dd-436f-bb4b-254d70ec1140  ONLINE       0     0     0
        special
          64bf4f72-d834-4c21-8361-315908bb0ade  ONLINE       0     0     0

errors: No known data errors

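(For reference, the names listed here are GPT partition UUIDs rather than device nodes; they can be mapped back to physical disks with something like the following, though the exact columns available may vary by lsblk version:)

Code:
lsblk -o NAME,SIZE,PARTUUID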

Using the disk names I run the following commands to attach the disks to each vdev and convert them into mirrors:

Code:
zpool attach DATA f158728d-26dd-436f-bb4b-254d70ec1140 sdc

zpool attach DATA 64bf4f72-d834-4c21-8361-315908bb0ade nvme0n2


When I run zpool status again I get the following:

Code:
truenas# zpool status                                                 
  pool: DATA
 state: ONLINE
  scan: resilvered 8.22M in 00:00:01 with 0 errors on Sun May 16 08:41:02 2021
config:

        NAME                                      STATE     READ WRITE CKSUM
        DATA                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            f158728d-26dd-436f-bb4b-254d70ec1140  ONLINE       0     0     0
            sdc                                   ONLINE       0     0     0
        special
          mirror-1                                ONLINE       0     0     0
            64bf4f72-d834-4c21-8361-315908bb0ade  ONLINE       0     0     0
            nvme0n2                               ONLINE       0     0     0

errors: No known data errors


When looking in Pool Status I can see two disks under each vdev and they are described as mirrors, however they have /dev/ before them in the description. Have I screwed something up here?
[Attached screenshot: Pool-After_attachment.jpg]


System as follows:
Code:
OS:        TrueNAS-SCALE-21.04-ALPHA.1

HPE ML30 Gen10
CPU:       Intel(R) Xeon(R) CPU E-2224
MEM:       64GB (2x 32GB ECC)
OS Drive:  1x Intel® Optane 800p 118GB
Storage:   3x Samsung QVO 8TB (3x single)
Storage:   1x Intel® P4600 1.6TB
META:      2x Intel® Optane 900p 280GB
NET:       Mellanox CX5 25GbE, 2 ports (10Gb fibre link, 25GbE DAC to workstation)
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Technically it seems to be working fine to me.
Remember: the UI is a fancy layer on top of CLI utilities; most likely it's a UI fluke.

Feel free to submit a small bug report about the UI displaying manually added disks differently from GUI-added disks; that's not how it should be displayed. But technically it should be fine under the hood.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
You'll want to detach those, as the reference isn't right and the swap/data partitions are missing. This should be possible from the 3-dots menu to the right of the disk in the status view you posted.

Unfortunately, it seems like the devs haven't completed the functionality there to allow for attach (at least not in 21.04... maybe in the nightlies, but I haven't checked those in a while).

If it's not there, you can still do the partitioning manually:
gdisk /dev/sdb
o (and agree to erase and start with a new GPT)
n (take default partition number 1, default start sector and +2G for the end sector, type 8200 ... Linux Swap)
n (take default partition number 2, default start and end sectors, type BF01 ... Solaris /usr & Mac ZFS)
p to show the partition table and make sure it's right
w to write it to disk and exit

Now you should be able to attach sdb2 (rather than sdb)... I don't think you need to use /dev/ (I haven't got the right test environment to confirm that for you right now).
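
For reference, a non-interactive equivalent using sgdisk should be roughly the following (untested here, and /dev/sdb is only an example device, so double-check the target before running it):

Code:
sgdisk -o /dev/sdb                      # new empty GPT (wipes the existing table)
sgdisk -n 1:0:+2G -t 1:8200 /dev/sdb    # partition 1: 2G Linux swap
sgdisk -n 2:0:0 -t 2:BF01 /dev/sdb      # partition 2: rest of the disk, Solaris/ZFS
sgdisk -p /dev/sdb                      # print the table to verify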
 

Spoon

Dabbler
Joined
May 24, 2015
Messages
25
sretalla said:
You'll want to detach those, as the reference isn't right and the swap/data partitions are missing. This should be possible from the 3-dots menu to the right of the disk in the status view you posted.

Unfortunately, it seems like the devs haven't completed the functionality there to allow for attach (at least not in 21.04... maybe in the nightlies, but I haven't checked those in a while).

If it's not there, you can still do the partitioning manually:
gdisk /dev/sdb
o (and agree to erase and start with a new GPT)
n (take default partition number 1, default start sector and +2G for the end sector, type 8200 ... Linux Swap)
n (take default partition number 2, default start and end sectors, type BF01 ... Solaris /usr & Mac ZFS)
p to show the partition table and make sure it's right
w to write it to disk and exit

Now you should be able to attach sdb2 (rather than sdb)... I don't think you need to use /dev/ (I haven't got the right test environment to confirm that for you right now).

Hi sretalla,

Thanks for the advice. This seems to have done the trick.

For future reference, the methodology employed was as follows:

1. Created partitions using gdisk. I replicated the start/end sectors exactly as per the drive I wanted to mirror:

Code:
gdisk /dev/sdb
o (and agree to erase and start with a new GPT)
n (take default partition number 1, default start sector and +2G for the end sector, type 8200 ... Linux Swap)
n (take default partition number 2, default start and end sectors, type BF01 ... Solaris /usr & Mac ZFS)
p to show the partition table and make sure it's right
w to write it to disk and exit

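The start/end sectors to replicate can be read off the disk that is already in the pool, for example (print-only, so safe to run; substitute the relevant device name):

Code:
gdisk -l /dev/sdb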

2. Found the partition UUIDs using blkid:

Code:
truenas# sudo blkid
/dev/nvme0n1p1: LABEL="DATA" UUID="682517618275423259" UUID_SUB="2218101330623299960" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="64bf4f72-d834-4c21-8361-315908bb0ade"
/dev/nvme0n2p1: LABEL="DATA" UUID="682517618275423259" UUID_SUB="1750548502843256491" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-dd00374743de2ef2" PARTUUID="4187f276-6930-a34d-b3f0-64ebc4a65ea7"
/dev/sr0: BLOCK_SIZE="2048" UUID="2021-04-22-20-04-07-00" LABEL="ISOIMAGE" TYPE="iso9660" PTTYPE="PMBR"
/dev/sda2: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="B1E6-4757" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="3a822184-5304-495a-b8fc-d0f33758697b"
/dev/sda3: LABEL="boot-pool" UUID="14558994541187850293" UUID_SUB="2067847344330453115" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="ff1bc6b6-56ad-4f12-9394-82b6e2365f81"
/dev/sdc1: LABEL="DATA" UUID="682517618275423259" UUID_SUB="1011053494450704972" BLOCK_SIZE="4096" TYPE="zfs_member" PARTLABEL="zfs-8aee78873273eccd" PARTUUID="424ea619-e380-7641-aaa2-b90771237178"
/dev/sdb2: LABEL="DATA" UUID="682517618275423259" UUID_SUB="12245327246652459559" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="f158728d-26dd-436f-bb4b-254d70ec1140"
/dev/nvme0n2p9: PARTUUID="f506097f-884a-eb46-93e1-d478a71f56a2"
/dev/sda1: PARTUUID="2de32e71-698a-4f44-8723-5e84a7488a7d"
/dev/sdc9: PARTUUID="18a1a27d-1051-e949-a5f7-3a91f48907f7"
/dev/sdb1: PARTUUID="caf31354-04a7-4eac-86ac-474e351b6706"
/dev/mapper/sdb1: UUID="55f3d2eb-5d8d-4db8-b7c4-dc955f112726" TYPE="swap"

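(If the full blkid listing is too noisy, the PARTUUID of just the new partition can be pulled on its own, for example; adjust the device name to match the newly created partition:)

Code:
blkid -s PARTUUID -o value /dev/sdc1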

3. Attached the drive using the zpool attach command:

Code:
zpool attach DATA f158728d-26dd-436f-bb4b-254d70ec1140 424ea619-e380-7641-aaa2-b90771237178


4. Ran zpool status to confirm:

Code:
truenas# zpool status                                                                               
  pool: DATA
 state: ONLINE
  scan: resilvered 6.84M in 00:00:02 with 0 errors on Tue May 18 06:13:29 2021
config:

        NAME                                      STATE     READ WRITE CKSUM
        DATA                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            f158728d-26dd-436f-bb4b-254d70ec1140  ONLINE       0     0     0
            424ea619-e380-7641-aaa2-b90771237178  ONLINE       0     0     0
        special
          64bf4f72-d834-4c21-8361-315908bb0ade    ONLINE       0     0     0

errors: No known data errors


The mirror is showing as expected under the GUI in Pool Status.

[Attached screenshot: Correctly Mirrored.png]
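
Note that in this test the special vdev is still a single disk; the same gdisk / blkid / zpool attach sequence should apply when the second Optane is added, along the lines of the following (the new partition's PARTUUID is a placeholder here):

Code:
zpool attach DATA 64bf4f72-d834-4c21-8361-315908bb0ade <partuuid-of-new-optane-partition>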
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
sretalla said:
You'll want to detach those, as the reference isn't right and the swap/data partitions are missing. This should be possible from the 3-dots menu to the right of the disk in the status view you posted.
That's nonsense; TrueNAS can also work just fine without said partitioning. It never has been an issue and never will be. It supports stock-created ZFS pools just fine.
The swap shouldn't ever be used anyway (ARC should be nuked ages before swap is used, and you should already have swap on your boot SSD, not on your HDD pool). No idea why they keep including it, though.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
ornias said:
That's nonsense; TrueNAS can also work just fine without said partitioning. It never has been an issue and never will be. It supports stock-created ZFS pools just fine.
The complaint wasn't that it wasn't working, rather that the display was "funny".

I am well aware that it works without partitioning, but my response was to the question asked by the OP which I interpreted as, "How do I make it look 'normal'?".

I have recently seen several cases covering concerns that an incorrect reference to drives in the pool topology can have an impact if drives are moved around (although I'm actually skeptical of those stories, they did seem to be coming from folks I would normally trust).

ornias said:
The swap shouldn't ever be used anyway (ARC should be nuked ages before swap is used, and you should already have swap on your boot SSD, not on your HDD pool). No idea why they keep including it, though.
Agreed (some cases have been seen on CORE where swap is invoked when the middleware runs away with resources... no idea how much worse it would be with no swap available in those cases).
No idea why it's there when swap is already created on the boot device under SCALE (other than perhaps consistency with CORE, which doesn't have swap on the boot pool/disk).
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
sretalla said:
I have recently seen several cases covering concerns that an incorrect reference to drives in the pool topology can have an impact if drives are moved around (although I'm actually skeptical of those stories, they did seem to be coming from folks I would normally trust).
I've seen that happening only when the UUID is changed (for example, some USB-to-SATA moves of drives). I've never seen ZFS detect drives based on their actual location; that's, AFAIK, not even how it's designed.

That being said:
It doesn't have to look funny at all; that's actually a relatively small fix for iX to implement. ;-)
 