Help migrating 4-wide RAIDZ1 to 6-wide RAIDZ2

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
I will first apologize for dragging up, yet again, a topic that has been widely covered here. After reading all the forum references I can find, I understand the fundamentals of the process: “Replicate-Destroy-Rebuild-Restore”. But the “devil is in the details”, as they say, so I would appreciate any comments on my proposed workflow to upgrade my data pool.

My TrueNAS (23.10.1.3) install is used to store automated backups from several PCs, created with a ROBOCOPY script kicked off by the Windows Task Scheduler each evening. The script simply copies the data folders on the clients to an SMB share on the NAS. The NAS is also a media server running Plex, and I have WireGuard installed for remote access.

Currently, this pool (“MAIN”) is made up of 4 × 2 TB WD Red Plus drives connected to an LSI 9211-8i, arranged as RAIDZ1. I have set up automatic snapshots nightly, followed by a nightly replication to an external USB drive pool called BACKUP. The rest of my setup is in my signature but is not likely relevant to this task.

I also have a 6 TB drive connected to the LSI card (as a single-drive “stripe” pool), currently unused, but I could replicate the MAIN pool there to facilitate the migration, rather than use the external USB drive. Experience tells me that external consumer-grade USB enclosures are not always reliable. (It’s all I have for now, but I will address this in the future.)

I have 6 × 4 TB HGST SAS drives and the appropriate cables arriving shortly. I first plan to spend several days running extended (“long”) SMART tests on the new drives before committing them to the migration. I know of no other tools to test drives in the TrueNAS SCALE environment (all my experience is in Windows), so if there is something better, please let me know.

So, first question: can I run the 4 existing SATA drives on one channel of the LSI card and up to 4 of the SAS drives (with the appropriate cable) on the other channel? This would allow me to continue using the MAIN pool for nightly backups while the SMART tests run on the new drives.

Also, at the risk of stating the obvious, I don’t have enough room in the system to connect all 10 drives simultaneously, so my proposed workflow to migrate from my 4-wide RAIDZ1 to a 6-wide RAIDZ2 is:

  1. Create a copy of the system config file using the GUI
  2. I will use the previous night’s Snapshot of BACKUP (which is recursive)
  3. I will use the previous night’s replication of MAIN in the “BACKUP” pool on the external USB drive (created using the “Full File System” option). Alternatively, I could replicate MAIN to the attached 6 TB internal drive connected to the LSI card, but this would take much longer since the 6 TB is currently empty.
  4. Export MAIN, choosing to NOT destroy the data.
  5. Remove and preserve the 4 × 2 TB SATA drives, and install the 6 × 4 TB SAS drives.
  6. Create a new 6-wide RAIDZ2 pool called NEW-MAIN using the new SAS drives
  7. From the CLI, replicate BACKUP to the NEW-MAIN pool using the previous night's snapshot (see the consolidated sketch after this list)
    zfs send -R BACKUP@auto-yyyy-mm-dd_hh-mm | zfs receive -F NEW-MAIN
  8. Also, from the CLI, export the newly populated NEW-MAIN
    zpool export NEW-MAIN
  9. In the CLI, import NEW-MAIN under the name MAIN (importing under a new name is how a pool is renamed)
  10. In the CLI, export MAIN
  11. Back in the GUI, under the Storage tab, select “Import Pool”, and import MAIN
  12. Restore the system config file saved at step 1.
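For clarity, here is roughly what I expect steps 7 to 10 to look like at the CLI. This is only a sketch: the snapshot name is illustrative, and I'm assuming the rename happens at import time, since ZFS renames a pool by importing it under a new name:

    # Step 7: replicate the whole BACKUP pool into NEW-MAIN
    # (-F lets the receive overwrite the freshly created root dataset)
    zfs send -R BACKUP@auto-yyyy-mm-dd_hh-mm | zfs receive -F NEW-MAIN
    # Step 8: export the newly populated pool
    zpool export NEW-MAIN
    # Steps 9-10: import under the new name (this is the rename),
    # then export again so the GUI import in step 11 can pick it up
    zpool import NEW-MAIN MAIN
    zpool export MAIN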
If all of this crashes, can I recover quickly by exporting MAIN, reattaching the old SATA drives, and re-importing MAIN?

Restoring the config file at step 12, which was saved when the system had a 4-wide RAIDZ1 data pool, to a system that now has a 6-wide RAIDZ2 data pool seems counterintuitive… although I understand the config file contains no information on the drive arrangement.

So… what have I missed? (Besides a lot!)

Sorry for this lengthy post, and thanks in advance for your comments.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
but could replicate the Main pool there to facilitate the migration, rather than use the external USB drive. Experience tells me that external consumer-grade USB enclosures are not always reliable. (It’s all I have for now, but will address this in the future)
I agree, I'd also rather use the internal drive over the USB attached one.

  1. Also, from the CLI, export the newly populated NEW-MAIN
    zpool export NEW-MAIN
  2. In the CLI, rename and import NEW-MAIN as MAIN
  3. In the CLI, export MAIN
  4. Back in the GUI, under the Storage tab, select “Import Pool”, and import MAIN
For renaming a ZFS pool, it seems like you would export it from the GUI first. I haven't renamed a pool yet; maybe the CLI works as well.

Restoring the config file at step 12 saved when the system had a 4-wide RAIDZ1 data pool to a system that now has a 6-wide RAIDZ2 data pool seems counterintuitive…. Although I understand the config file contains no information on the drive arrangement.
Why would you restore the configuration? Renaming the pool should leave all your settings (replication tasks etc.) intact. At this point you'd have nothing to gain from restoring the old config, and it may even mess things up, because your new pool is not your old pool.

I know of no other tools to test the drives in the TrueNAS Scale environment (all my experience is in Windows). If there is something better please let me know.
This is my go-to burn-in procedure for new drives, and there's also this one. I'd recommend burning in your 6 TB spare drive before migrating.
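Roughly, the commands involved look like this. Just a sketch: /dev/sdX is a placeholder, so double-check device names with lsblk first, and note that the badblocks pass is destructive:

    # Kick off a long SMART self-test (check results later with smartctl -a)
    smartctl -t long /dev/sdX
    # Destructive write-mode badblocks pass over the whole drive
    # (-b 4096 matches modern 4K-sector drives; wipes all data)
    badblocks -b 4096 -ws /dev/sdX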

Make sure you do not have your system dataset on your old pool; move it before migrating. That's not the end of the world, but I managed to overlook that one time, and it's easier if you can avoid losing your system dataset in the first place.
 
Last edited:

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
Thanks for your reply. I have badblocks running on the spare 6 TB drive as I write this... so thanks for that tip as well.

The backup and restore of the config file came from an often-referenced post from 2016, and the reason behind it was to restore the shares rather than having to recreate them. However, that post relates to FreeNAS, and I'm using TrueNAS SCALE... so it might not be appropriate. I will see how things look after the migration is complete... probably prudent to just recreate the shares.

The boot pool is on a mirrored pair of SSDs and is entirely separate from the MAIN pool. Not touching that at all.

One last question. In the Export dialog there is a check box, “Delete Saved Configurations from TrueNAS”. It's checked by default. My failsafe “out” if all collapses during this migration was to (hopefully) reinstall the original 4 SATA drives and re-import them to have a functional system while I regroup and reformulate. If this box is checked, does it mean that all references to the original 4-drive MAIN pool will be lost, and I will be unable to restore the original configuration if disaster strikes?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
FWIW,

I would perform a fresh whole-pool replication to your single 6 TB disk, set up as a ZFS pool.

Then you can remove the 4 × 2 TB disks; keeping them intact provides the redundancy you no longer have on the single-disk 6 TB pool.

Then, if you have at least seven SAS/SATA ports, add the 6 disks and set up a 6-wide RAIDZ2 array... and then replicate again from the 6 TB to the new Z2 array.

And now you can re-use your old disks after verifying everything worked.

If you don't have room for the 6 disks of your Z2 plus the 6 TB, remove the 6 TB, make the Z2, then remove one of the Z2 disks and replace it with the 6 TB. You will have one drive of redundancy, and after you replicate you can add back the missing Z2 disk.
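Sketched with illustrative names (NEW-MAIN for the pool, sdX for the member you pull), it would go something like:

    # Cleanly offline one Z2 member before pulling it, freeing a port
    # for the 6TB source pool; the pool runs degraded but still has
    # one drive of redundancy
    zpool offline NEW-MAIN sdX
    # ... replicate from the 6TB pool, then swap the disks back ...
    # Bring the member back; ZFS resilvers the writes it missed
    zpool online NEW-MAIN sdX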

Be careful with permissions. When you replicate you have the option to make things read-only. You don't really want to do that.

Also, move your system logs back to the boot disk until you finish the migration.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
[..] I don’t have enough room in the system to connect all 10 drives simultaneously, [..]
Is that so? You have 10 SATA ports on the board, and I really wonder why you are using an HBA in the first place. For your setup it does not make sense at all. But now that you have it, just keep the existing pool connected to it, attach the new HDDs to the board's SATA ports, and you should be fine. If the case is too small, you can probably keep the drives lying around outside for a while?

Yes, you might need a bigger PSU, but it is by far the easiest setup.
 

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
I’m using an HBA because I’m connecting SAS drives.
 

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
Thanks. The HBA has 8 ports so no worries there.
Be careful with permissions. When you replicate you have the option to make things read-only. You don't really want to do that.

Also, move your system logs back to the boot disk until you finish the migration.
Thanks for this.... I currently have the BACKUP replication's “Destination Dataset Read-only Policy” set to “SET”. I understand this should be set to “IGNORE”?

Where are the system logs located? I can copy them to the boot disk, but I need to know where to find them.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thanks. The HBA has 8 ports so no worries there.
I found that your mainboard has 6 SATA data ports, not 10 as @ChrisRJ suggested, but maybe I looked up the wrong board.
What he meant (I hadn't checked this when you said you couldn't connect everything) is that, counting the HBA, you have enough ports to keep your old pool connected while you add your new drives, so you can migrate directly.

Yes, set the read-only policy to ignore.
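If your datasets were already received with readonly=on, something along these lines should clear it afterwards (using MAIN as an example pool name):

    # Reset readonly on the replicated datasets back to the inherited
    # default (off); -r recurses through all child datasets
    zfs inherit -r readonly MAIN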

Where are the system logs located? I can copy them to the boot disk, but I need to know where to find them.
I'm not sure what @Stux meant, I was talking about the system dataset.
 

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
Ok… now I’m a bit confused. My board does indeed have 10 SATA ports: 2 × SATA3, 4 × SATA2 as AHCI, and 4 × SATA2 as SCU. I can’t connect the 6 new SAS drives to the SATA ports, and the 4 SATA drives are already connected to the HBA. Unceremoniously disconnecting the existing pool from the HBA, reconnecting it to the 4 AHCI SATA2 ports, and expecting it to work without missing a beat did not seem intuitively reasonable. However, I know Intel RAID implementations don’t care what order the drives are connected or reconnected to the ports, so maybe TrueNAS/Linux wouldn’t care either.
I think I’ll replicate the MAIN dataset to the spare 6 TB, connect the SAS drives, and replicate the data back.
And to reiterate, I am not touching the BOOT pool at all. Wouldn’t the system logs live there, not on a data pool?

Not to seem ungrateful for the help offered, but I was really hoping to get some feedback on my workflow, specifically on the export, rename, and import steps. One concern I have is whether or not the system will recognize the physically new pool as the original one, i.e. with all the links etc. still intact.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Can’t connect the 6 new SAS drives to the SATA ports, and the 4 SATA drives are already connected to HBA. Unceremoniously disconnecting the existing pool from the HBA, reconnecting them to the 4 AHCI SATA2 ports and expecting them to work without missing a beat did not seem intuitively reasonable.
Maybe shut down the server first (if hot-plug is supported, even that may not be necessary, but personally I'd do it anyway), but other than that there's really no magic to it.

I think I’ll replicate the MAIN dataset to the spare 6 TB, connect the SAS drives, and replicate the data back.
[...]
not to seem ungrateful for the help offered, but I was really hoping to get some feedback on my workflow, specifically on the export, rename, import steps.
Well, this is a major part of your workflow. @ChrisRJ picked up on the SATA ports, which is very good: it saves you the hassle of needing an intermediate solution for the migration.
I mean ultimately you need to decide what you want, but directly replicating from your old pool to your new pool is the best option.

And to reiterate, I am not touching the BOOT pool at all. Wouldn’t the system logs live there, not on a data pool?
It's not entirely clear from the link I gave you, however
If the system has one pool, TrueNAS configures that pool as the system dataset pool.
does not mean the boot pool. It means the system dataset will live on the first pool created on the machine, and that will be your current MAIN pool. You can check under Datasets, or under the Storage Settings [and change it there directly].
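From a shell you should also be able to query and move it via the middleware client. I believe the calls look like this on SCALE, but verify against the docs for your version (the update payload field name is my assumption from the API):

    # Show where the system dataset currently lives
    midclt call systemdataset.config
    # Move it to the boot pool
    midclt call systemdataset.update '{"pool": "boot-pool"}'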


one concern I have is whether or not the system will recognize the physically new dataset as the original dataset ie. with all the links etc. etc still intact.
I am confident that after renaming it to your original name (and assuming you replicated a 1:1 copy) you will have to do close to no reconfiguration. I could be wrong here, but after all it's just path names for SMB shares etc.; when they don't change, nothing should break. I'd stop all services temporarily in the meantime (SMB, snapshots, replication tasks, etc.).
Even if you do need to reconfigure things, I'd see no way around it. You are not changing your config, you are just replacing a pool and renaming it.
 

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
Thanks for this… it’s cleared up a lot of my initial misunderstanding. If switching the existing MAIN pool to the SATA ports won’t ‘roast’ it, then that does seem to be the most efficient way to complete the migration.
Again, thank you, and thanks to others who responded to my query
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm not sure what @Stux meant, I was talking about the system dataset.

Yes, that's what I meant. You want to tell TrueNAS to store the system dataset on the boot drive until you finish the migration. Then you can tell it to store it on the migrated pool...

ultimately you need to decide what you want, but directly replicating from your old pool to your new pool is the best option

This is true.

And the below may help

 
Last edited:

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
@Stux: Thanks for your reply. Please forgive my paranoia, but just for greater certainty — I can move the System Dataset by going to "System Settings > Advanced > Storage > Configure" and choosing the "Boot-Pool" as the destination. This will migrate the System Dataset to the Boot-Pool?

Also, and once again forgive my paranoia — to replicate from the old pool to the new pool directly, I would begin by:
1. Exporting the current MAIN pool (preserving the shares),
2. Disconnecting the MAIN drives from the HBA,
3. Reconnecting them to the mainboard's SATA ports,
4. Importing the MAIN pool.

I'm clear on the balance of the workflow, (connecting the new SAS drives, etc), but I have this lingering fear that the system won't correctly recognize the Main pool drives in their new configuration connected to the SATA ports. Yes.... I know, "Why wouldn't they be recognized?", but I did say I was paranoid.... and apologized in advance
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I can move the System Dataset, by going to "System Settings>Advanced>Storage>Configure" and choosing the "Boot-Pool" as the destination. This will migrate the System Dataset to the Boot-Pool?
Yes. Usually this works without a hitch, in the worst case you need to reboot afterwards.

Also, and once again forgive my paranoia — to replicate from the old pool to the new pool directly, I would begin by:
1. Exporting the current Main Pool (preserving the Shares),
2. Disconnecting the Main drives from the HBA
3. Reconnect them to the mainboard's SATA ports
4. Import the Main pool.
AFAIK the pools are exported during shutdown anyway. Exporting plays more of a role if you want to migrate to another system. I'd just shut down, swap the SATA ports, and fire the server up again.

but I have this lingering fear that the system won't correctly recognize the Main pool drives in their new configuration connected to the SATA ports.
I feel your fear of entering unknown waters, but the pool will not care to which ports it's connected.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes. You can move the pool's drives, shuffle them up, and connect them any which way, as long as they are directly connected and not via some sort of RAID controller; so SAS or SATA should be fine.
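For instance, ZFS identifies pool members by their on-disk labels rather than by port or /dev name, so after reshuffling the cables a plain scan will find everything:

    # List importable pools and their member disks, wherever they are now
    zpool import
    # Import by name; the port each disk sits on doesn't matter
    zpool import MAIN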

Regarding the System Dataset on Core, it's Settings -> System Dataset, then choose the target pool from the Configure System Dataset popup.
 

cdog89

Explorer
Joined
Jan 19, 2024
Messages
75
I will first apologize for dragging up, yet again, a topic that has been widely covered here. [...] So… what have I missed? (Besides a lot!)

I just did exactly what you're trying to do. If you haven't completed it yet, I can tell you the simple procedure I used to successfully do this.
 

englishm

Dabbler
Joined
Jan 9, 2024
Messages
14
I have actually managed to pull this off — thanks for checking in, @cdog89. As others pointed out, my paranoia was a bit unfounded. The 6 × 4 TB drives took a little longer to arrive, after which they spent several days undergoing multiple rounds of badblocks (thanks @chuck32) and long SMART testing, but the actual migration went very smoothly. I ended up connecting the older drives to the board's SATA ports and the new drives to the LSI SAS ports... zfs send | receive and there you go! Thanks to all who helped — @ChrisRJ, @Stux and @chuck32, it is more appreciated than you likely know.
 