I will first apologize for dragging up, yet again, a topic that has been widely covered here. After reading all the forum references I can find, I understand the fundamentals of the process: “Replicate-Destroy-Rebuild-Restore”. But the “devil is in the details”, as they say, so I would appreciate any comments on my proposed workflow to upgrade my data pool.
My TrueNAS (23.10.1.3) install is used to store automated backups from several PCs, created with a ROBOCOPY script kicked off by the Windows Task Scheduler each evening. The script simply copies the data folders on the clients to an SMB share on the NAS. This NAS is also a media server running Plex, and I have WireGuard installed for remote access.
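For context, each client’s nightly task runs something along these lines (the folders, share path, and switches below are illustrative placeholders, not my exact script):
robocopy "C:\Users\Me\Documents" "\\TRUENAS\Backups\PC1\Documents" /MIR /R:2 /W:5 /LOG+:"C:\Logs\robocopy-PC1.log"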
Currently, this pool (“MAIN”) is made up of 4 x 2 TB WD RED+ drives connected to an LSI 9211-8i as RAIDZ1. I have set up automatic nightly snapshots, followed by a nightly replication to an external USB drive pool called BACKUP. The rest of my setup is in my signature but is not likely relevant to this task.
I also have a 6 TB drive connected to the LSI card (as a single-drive “stripe” pool), currently unused, but I could replicate the MAIN pool there to facilitate the migration rather than use the external USB drive. Experience tells me that external consumer-grade USB enclosures are not always reliable. (It’s all I have for now, but I will address this in the future.)
I have 6 x 4 TB HGST SAS drives and the appropriate cables arriving shortly. I first plan to spend several days running extended (“Long”) SMART tests on the new drives before committing them to the migration. I know of no other tools to test drives in the TrueNAS SCALE environment (all my experience is in Windows), so if there is something better, please let me know.
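For what it’s worth, my plan is simply to start the extended tests from a shell with smartctl, roughly as follows (the device name is just a placeholder; the actual /dev names will differ per drive):
smartctl -t long /dev/sdX    # start the extended (long) self-test on one drive
smartctl -a /dev/sdX         # check progress and the result once it completes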
So, first question: can I run the 4 existing SATA drives on one channel of the LSI card and up to 4 of the SAS drives (with the appropriate cable) on the other channel? This would allow me to continue using the MAIN pool for nightly backups while the SMART tests run on the new drives.
Also, at the risk of stating the obvious, I don’t have enough room in the system to connect all 10 drives simultaneously, so my proposed workflow to migrate from my 4-wide RAIDZ1 to a 6-wide RAIDZ2 is:
- Create a copy of the system config file using the GUI
- I will use the previous night’s snapshot of BACKUP (which is recursive)
- I will use the previous night’s replication of MAIN in the “BACKUP” pool on the external USB drive (created using the “Full File System” option). Alternatively, I could replicate MAIN to the attached 6 TB internal drive connected to the LSI card, but this would take much longer since the 6 TB pool is currently empty and would need a full replication from scratch.
- Export MAIN, choosing to NOT destroy the data.
- Remove and preserve the 4 x 2 TB SATA drives, and install the 6 x 4 TB SAS drives.
- Create a new 6-wide RAIDZ2 pool called NEW-MAIN using the new SAS drives
- From the CLI, replicate BACKUP to the NEW-MAIN pool using the previous night's snapshot
zfs send -R BACKUP@auto-yyyy-mm-dd_mm-ss | zfs receive -F NEW-MAIN
- Also from the CLI, export the newly populated NEW-MAIN
zpool export NEW-MAIN
- In the CLI, rename and import NEW-MAIN as MAIN (see the command sketch after this list)
- In the CLI, export MAIN
- Back in the GUI, under the Storage tab, select “Import Pool”, and import MAIN
- Restore the system config file saved at step 1.
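To make the rename/re-import steps concrete, I believe the CLI portion works out to roughly the following (assuming NEW-MAIN has already been exported in the earlier step):
zpool import NEW-MAIN MAIN    # import the pool under its new name, MAIN
zpool export MAIN             # export it again so it can be imported cleanly from the GUI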
Restoring, at step 12, a config file that was saved when the system had a 4-wide RAIDZ1 data pool onto a system that now has a 6-wide RAIDZ2 data pool seems counterintuitive, although I understand the config file contains no information on the drive arrangement.
So… what have I missed? (Besides a lot!)
Sorry for this lengthy post, and thanks in advance for your comments.