Upgrading my Hard Drives - Best way to transfer existing datasets?

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
To summarize: I have one zpool with a simple raidz1 vdev made up of 3x leftover 3TB WD Red drives I had. There is only 217GB of data on it so far.

I recently bought 4x 12TB Seagate Ironwolf drives off Amazon (there was a pricing error on Amazon Australia and it was too good to turn down). They haven't arrived yet and I haven't decided what they will be used for, but I'm thinking of putting them in my TrueNAS machine. I do not want to create a new vdev out of them in my current zpool; I would rather remove the old 3TB WD Reds, both to save a bit on power while the server is running and to free up room in the case for the new drives.

Taking that into account, what is the best way to migrate the data on my existing dataset to the new zpool I'd create for my new 4x 12TB hard drives? Keep in mind I don't think I can have every single drive in my server at once.
I could probably create a makeshift zfs server out of some spare hardware I have, but obviously I'd prefer less hassle.

I was thinking maybe I could just move all the existing data to an external hard drive via SMB or rsync (it's only 217GB so far)? Then take the old (3TB) drives out, put the new (12TB) drives in, recreate/configure all my datasets on the new drives, and finally move all the data from the external drive to the newly created datasets over SMB/rsync. Or is there a simpler way?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I was thinking maybe I could just move all the existing data to an external hard drive via SMB or rsync (it's only 217GB so far)? Then take the old (3TB) drives out, put the new (12TB) drives in, recreate/configure all my datasets on the new drives, and finally move all the data from the external drive to the newly created datasets over SMB/rsync. Or is there a simpler way?
That should be the simplest way for your situation.
  1. Create a new pool from USB device.
  2. Rsync the 3TB pool to the USB one.
  3. Export the 3TB pool.
  4. Create the 12TB pool.
  5. Rsync the USB pool to the 12TB one.
  6. Nuke the USB pool.
  7. Scrub the 12TB pool.
  8. If everything is OK, wipe the 3TB drives.
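From the CLI side it would look roughly like the sketch below. Pool names, mount points, device names and rsync flags are only placeholders, and on TrueNAS you would normally create, export and scrub pools through the web UI rather than running these by hand:

```sh
# 1-2. Create a temporary pool on the USB disk and copy the data onto it
#      (device name is an example; pools mount under /mnt/ on TrueNAS).
zpool create usbpool /dev/da4
rsync -aHX --progress /mnt/oldpool/ /mnt/usbpool/

# 3-4. Export the old pool, swap the drives, then build the new pool
#      (RAIDZ2 shown here; see the note below about RAIDZ1).
zpool export oldpool
zpool create newpool raidz2 /dev/da0 /dev/da1 /dev/da2 /dev/da3

# 5. Copy the data back onto the new pool.
rsync -aHX --progress /mnt/usbpool/ /mnt/newpool/

# 6-8. Verify the new pool before nuking the USB copy and wiping the old drives.
zpool scrub newpool
zpool status -v newpool
```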
Also, do note that 4x 12TB drives in Z1 is risky with 1e-14 URE disks.
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
That should be the simplest way for your situation.
  1. Create a new pool from USB device.
  2. Rsync the 3TB pool to the USB one.
  3. Export the 3TB pool.
  4. Create the 12TB pool.
  5. Rsync the USB pool to the 12TB one.
  6. Nuke the USB pool.
  7. Scrub the 12TB pool.
  8. If everything is OK, wipe the 3TB drives.
Also, do note that 4x 12TB drives in Z1 is risky with 1e-14 URE disks.
Do consumer disks really have that high of a failure rate during rebuilds? Like, 85% for four 8TB disks?

Makes me wonder why anyone would run RAID5/RaidZ1 at all.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The rebuild part of it is not super meaningful; it's just a critical period if the redundancy is gone.
Do consumer disks really have that high of a failure rate during rebuilds? Like, 85% for four 8TB disks?
No, according to the calculations presented, which are themselves based on a very digested single number, 0.15%. Not a crazy number, intuitively.
Where this really breaks down is at the 10^-14 error rate. For a change, it's on the pessimistic side of things for an otherwise functional disk. I haven't double-checked the math, but the numbers sort of line up; they're just not super realistic and far too vague to be of meaningful use:
  • How long after the writes is the error rate measured?
  • How many errors happen during a write and so would be caught by ZFS rather quickly?
  • Under what conditions were the disks stored? Under what conditions do they operate for the test?
  • How are errors distributed? It seems likely that they'll show up in clusters.
Here's the thing, though: let's say the risk is merely 1% - that's still a whole lot if you're storing meaningful data. RAIDZ1 and RAID5 are not great because of the limited redundancy, but there's also a lot of analysis floating around based on dodgy numbers.
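For reference, the back-of-the-envelope that produces figures like the 85% above just treats the URE spec as an independent per-bit error probability over everything read during the resilver, which is exactly the assumption the points above call into question. A quick sketch of that arithmetic, using the 4x 8TB / 10^-14 inputs:

```sh
# Naive URE odds for resilvering a 4x 8TB RAIDZ1: the three surviving
# disks (~24 TB) are read in full at an assumed 1e-14 errors per bit.
awk 'BEGIN {
  bits = 3 * 8e12 * 8                # bits read during the resilver
  p = 1 - exp(-1e-14 * bits)         # P(at least one URE), Poisson approximation
  printf "P(>=1 URE) = %.1f%%\n", p * 100
}'
# -> P(>=1 URE) = 85.3%
```

Swap in 12TB disks and the same formula gives roughly 94%, which is where these scary-looking numbers come from.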
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Makes me wonder why anyone would run RAID5/RaidZ1 at all.
It is in fact generally not recommended to use RAIDZ1 with disks larger than 2TB.
Most SATA drives of considerable size are rated for 1e-15 URE nowadays.
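Running the same naive resilver arithmetic as in the post above with a 1e-15 spec shows why that one order of magnitude matters so much (spec-sheet numbers only, with all the caveats already mentioned):

```sh
# Naive URE odds for resilvering a 4x 12TB RAIDZ1 (3 surviving disks, ~36 TB read),
# comparing 1e-14 and 1e-15 per-bit specs.
awk 'BEGIN {
  bits = 3 * 12e12 * 8
  printf "1e-14 spec: ~%.0f%%\n", (1 - exp(-1e-14 * bits)) * 100
  printf "1e-15 spec: ~%.0f%%\n", (1 - exp(-1e-15 * bits)) * 100
}'
# -> 1e-14 spec: ~94%
# -> 1e-15 spec: ~25%
```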
 