SOLVED Change stripe vdev layout to mirror

AamirA

Dabbler
Joined
Sep 5, 2023
Messages
12
I know there are a few posts about doing this, but for some reason (maybe because they're old) I can't follow them. For example, this post talks about a pool status page that I simply can't find. I don't know if I'm being dumb, but could someone help me find it or its replacement? I have a single stripe pool with one drive, which I would like to mirror to a new drive of the same size. I'd also like the data to stay, but it's not that important. How do I do this?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi @AamirA

The previous threads may have referred to TrueNAS CORE, whereas you've posted in the SCALE forum, so I assume you're using SCALE, which has a slightly different UI.

You should be able to do this through the GUI, by going to the Storage page and selecting the Manage Devices option underneath your pool name.

From the new window, select your single drive, and then click the Extend button and select your second, empty drive. Confirm the attachment and it should automatically resilver to a mirror.
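
If you're curious what that does under the hood, the GUI Extend action is essentially a zpool attach. This is a rough sketch only, with placeholder names ("tank" for the pool, sda1 for the existing disk, sdb for the new one - check zpool status and lsblk for your real names); on SCALE the GUI is the supported route, since the middleware also handles partitioning for you:

sudo zpool status tank                 # note the exact name of the existing single device
sudo zpool attach tank sda1 /dev/sdb   # attach the empty disk; the vdev becomes a mirror
sudo zpool status -v tank              # watch the resilver progress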
 

AamirA

Dabbler
Joined
Sep 5, 2023
Messages
12
I see. I saw that button but didn't know if it would make it a mirror. Thanks!
 

oms

Cadet
Joined
Jan 25, 2024
Messages
4
I recently built my first TrueNAS SCALE system and ran into this problem.

After confirming that going from a stripe to a mirror was possible, I went ahead and built my system with two storage pools: a "main" pool with 2 x 8TB drives for bulk storage and a "fast" pool with SSDs.

At the time of building the "fast" pool I only had access to a single 750GB SSD. Knowing that I could extend the pool later, I configured it as a stripe. I finally got around to recovering the second SSD and installed it, hoping to extend the pool with a second mirror disk.

The problem is that the second disk is an old 512GB SSD, which is smaller than my 750GB one, and when I try to extend I get a "device too small" error (full error message below).

I already have plex, home-assistant and a couple of other apps that I am using, and would prefer not having to rebuild the whole pool and reinstall the apps.

Is there any chance I can move from a stripe to a mirror? I don't mind losing 250GB on the first SSD, and I understand I'll only have 512GB available.

Thanks in advance

[EZFS_BADDEV] cannot attach /dev/disk/by-partuuid/07db271d-f4f3-4998-8df4-0042d923f49f to /dev/disk/by-partuuid/6b3c643f-3520-4a71-a7bb-beaf9359355b: device is too small
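
In case it's useful to anyone hitting the same error: the partuuids in that message can be mapped back to the physical disks and their raw sizes with something like the following (per the by-id listing in my next post, sdb is the 750GB Crucial and sdc is the 512GB Samsung):

sudo lsblk -b -o NAME,SIZE,PARTUUID /dev/sdb /dev/sdc   # byte sizes plus the partition UUIDs ZFS refers to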
 

oms

Cadet
Joined
Jan 25, 2024
Messages
4
Some more info: I'm trying to add the /dev/sdc disk as a mirror of the /dev/sdb disk in the "fast" pool.

admin@truenas[~]$ sudo ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root 9 Jan 25 20:12 ata-Crucial_CT750MX300SSD1_162312EC3F72 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 25 20:12 ata-Crucial_CT750MX300SSD1_162312EC3F72-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Jan 25 20:12 ata-HL-DT-ST_DVD-RAM_GH40L_924CF011305 -> ../../sr0
lrwxrwxrwx 1 root root 9 Jan 25 20:12 ata-ST8000VN004-3CP101_WWZ2G7D7 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 25 20:12 ata-ST8000VN004-3CP101_WWZ2G7D7-part1 -> ../../sda1
lrwxrwxrwx 1 root root 9 Jan 25 20:12 ata-ST8000VN004-3CP101_WWZ2G8PF -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 25 20:12 ata-ST8000VN004-3CP101_WWZ2G8PF-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 9 Jan 25 20:56 ata-Samsung_SSD_850_PRO_512GB_S250NXAG710886M -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 25 20:56 ata-Samsung_SSD_850_PRO_512GB_S250NXAG710886M-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jan 25 20:12 dm-name-md127 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Jan 25 20:12 dm-uuid-CRYPT-PLAIN-md127 -> ../../dm-0
lrwxrwxrwx 1 root root 11 Jan 25 20:12 md-name-swap0 -> ../../md127
lrwxrwxrwx 1 root root 11 Jan 25 20:12 md-uuid-9b94d3f7:da50b673:f6f06bf1:3b54814c -> ../../md127
lrwxrwxrwx 1 root root 13 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH87YQH4U -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH87YQH4U-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH87YQH4U-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH87YQH4U-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH87YQH4U-part4 -> ../../nvme1n1p4
lrwxrwxrwx 1 root root 13 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH88TQH4U -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH88TQH4U-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH88TQH4U-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH88TQH4U-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-KBG50ZNV256G_KIOXIA_935PH88TQH4U-part4 -> ../../nvme0n1p4
lrwxrwxrwx 1 root root 13 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd3c -> ../../nvme1n1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd3c-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd3c-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd3c-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd3c-part4 -> ../../nvme1n1p4
lrwxrwxrwx 1 root root 13 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd5b -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd5b-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd5b-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd5b-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Jan 25 20:12 nvme-eui.00000000000000008ce38e040467dd5b-part4 -> ../../nvme0n1p4
lrwxrwxrwx 1 root root 9 Jan 25 20:12 wwn-0x5000c500e73d2103 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan 25 20:12 wwn-0x5000c500e73d2103-part1 -> ../../sda1
lrwxrwxrwx 1 root root 9 Jan 25 20:12 wwn-0x5000c500e73d2c1e -> ../../sdd
lrwxrwxrwx 1 root root 10 Jan 25 20:12 wwn-0x5000c500e73d2c1e-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 9 Jan 25 20:12 wwn-0x5001480000000000 -> ../../sr0
lrwxrwxrwx 1 root root 9 Jan 25 20:56 wwn-0x5002538840070292 -> ../../sdc
lrwxrwxrwx 1 root root 10 Jan 25 20:56 wwn-0x5002538840070292-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Jan 25 20:12 wwn-0x500a075112ec3f72 -> ../../sdb
lrwxrwxrwx 1 root root 10 Jan 25 20:12 wwn-0x500a075112ec3f72-part1 -> ../../sdb1


admin@truenas[~]$ sudo zpool status
[sudo] password for admin:
pool: boot-pool
state: ONLINE
scan: scrub repaired 0B in 00:00:03 with 0 errors on Thu Jan 25 03:45:04 2024
config:

NAME           STATE     READ WRITE CKSUM
boot-pool      ONLINE       0     0     0
  mirror-0     ONLINE       0     0     0
    nvme0n1p3  ONLINE       0     0     0
    nvme1n1p3  ONLINE       0     0     0

errors: No known data errors

pool: fast
state: ONLINE
config:

NAME                                      STATE     READ WRITE CKSUM
fast                                      ONLINE       0     0     0
  6b3c643f-3520-4a71-a7bb-beaf9359355b    ONLINE       0     0     0

errors: No known data errors

pool: main
state: ONLINE
config:

NAME                                        STATE     READ WRITE CKSUM
main                                        ONLINE       0     0     0
  mirror-0                                  ONLINE       0     0     0
    fb7acbd1-a0d5-409a-81cb-fe4cf58d38d0    ONLINE       0     0     0
    00131c78-ab20-48fb-9c45-25f959b07550    ONLINE       0     0     0

errors: No known data errors
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi @oms

Unfortunately, you can't attach a smaller device to mirror a larger one; the reverse, though, is possible.

Assuming your data all fits within that 512G size, you could make a new pool (named fast2, 2fast2furious, whatever you'd prefer), migrate your applications there from the webUI, and then use the 750G SSD to mirror the 512G one.

Migration requires you to stop all of the applications (to ensure data consistency) and will take some time depending on how large their installations are:

Just for the sake of anyone coming here in 2023, TrueNAS SCALE will take care of the ix-applications dataset for you in the UI.

1. Navigate to Apps
2. Stop all applications
3. Click the Settings dropdown (second option from the upper right) and select "Choose Pool"
4. Select a new pool from the dropdown list, and hit "Choose"
5. TrueNAS will prompt you to confirm whether you would like to migrate the data. If you choose to, it will replicate the dataset and update the app settings automatically.
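
If you'd like to see the rough shape of that plan from the shell (the webUI is still the supported way to do all of this on SCALE, since the middleware handles partitioning, pool export and the app migration), it's essentially: create the new pool on the 512G disk, move everything over, retire the old pool, then attach the 750G disk as the mirror. A sketch only, using the device names from your by-id listing - confirm the exact vdev name with zpool status before the attach:

sudo zpool create fast2 /dev/disk/by-id/ata-Samsung_SSD_850_PRO_512GB_S250NXAG710886M
# ...migrate datasets and apps to fast2 from the webUI, then export/disconnect the old "fast" pool...
sudo zpool status fast2   # note the name shown for the 512G device
sudo zpool attach fast2 ata-Samsung_SSD_850_PRO_512GB_S250NXAG710886M \
    /dev/disk/by-id/ata-Crucial_CT750MX300SSD1_162312EC3F72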
 

oms

Cadet
Joined
Jan 25, 2024
Messages
4
Thanks for your reply. I ended up completely wiping out the drives and started fresh.
 

oms

Cadet
Joined
Jan 25, 2024
Messages
4
The GUI now shows a "mixed drives" error on the storage page.

Apart from the obvious capacity implications, are there any real issues with doing this?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Other than that message on the status screen, no. If you ever detach the smaller drive, though, ZFS may expand the vdev to fill the 750G drive, and at that point a 512G device can no longer be attached. So if the 512G drive fails, leave it in a faulted/absent state until you can replace it rather than detaching it.
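
If you want to be explicit about that behaviour, the relevant knob is the pool's autoexpand property: with it off, the vdev shouldn't grow to the full 750G on its own even if the smaller device is detached (it can still be grown manually with zpool online -e). A quick check, with "fast2" standing in for whatever your rebuilt pool is actually called; note that TrueNAS may enable autoexpand on pools it creates:

sudo zpool get autoexpand fast2
sudo zpool set autoexpand=off fast2   # only if you want to pin the vdev at the smaller size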
 