Copy 4x4TB HDDs to 4x8TB HDDs RAIDZ1 bit by bit?

rebl

Dabbler
Joined
Apr 22, 2020
Messages
15
My system runs and works just fine, but I need more storage. My (maybe dumb) idea is to replace the 4 TB HDDs with 8 TB HDDs.
Here is the layman's question: Is it possible to copy a 4 TB RAIDZ1 HDD to an 8 TB HDD bit by bit to double the storage?
First step: Copy all of the 4 TB HDDs to four 8 TB HDDs bit by bit.
Second step: Use the additional volume to double the storage?
If that works, could I copy only one or two HDDs instead of all four?
Would this work in theory, or is this a dumb idea?

I'm searching right now for the easiest way to expand my system in a small and already full mini-ITX case...
BTW: I have no way to copy 16 TB to interim storage, which would of course be the easiest way.
If you have any other ideas, please let me know.

Edit: I found this documentation to replace disks to grow a pool. So far I thought that all HDDs must be the same size in a RAID system?
So I don't have to copy the HDDs bit by bit, because FreeNAS will do it for me, doesn't it?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Yes, replacing all the disks in a vdev will increase its size after the last disk is resilvered.

All disks in a vdev do not have to be the same size, but each disk will only contribute as much space as the smallest drive in the vdev.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So far I thought that all HDDs must be the same size in a RAID system?
Not so in ZFS. The capacity of the vdev will be determined by the size of the smallest disk there, but nothing prevents you from having a RAIDZ1 with 2 x 2 TB and 2 x 4 TB disks. It will have a net capacity of ~6 TB.
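That rule of thumb can be written as a quick back-of-the-envelope calculation. This is a minimal sketch (the `raidz_usable_tb` helper is my own name, and real pools lose a bit more to metadata and padding):

```python
def raidz_usable_tb(disk_sizes_tb, parity=1):
    """Approximate usable raidz capacity: every member counts as the
    size of the smallest disk, and parity disks are subtracted."""
    smallest = min(disk_sizes_tb)
    return (len(disk_sizes_tb) - parity) * smallest

# danb35's example: raidz1 with 2 x 2 TB and 2 x 4 TB disks
print(raidz_usable_tb([2, 2, 4, 4]))  # -> 6
```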
 

K_switch

Dabbler
Joined
Dec 18, 2019
Messages
44
@rebl
I do not pretend to be an expert by any means... but let's just say that reading this really helped me understand how ZFS handles imbalanced vdevs and adding additional drives.

Let me know if that helped clear anything up!
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
replace disks to grow a pool

That's it right there. Replace one disk at a time. Wait for the resilver to complete, then replace the next one. Once the last disk has been resilvered, you have double the capacity.

All disks in a raidz will use space equivalent to the smallest disk in the raidz. As long as autoexpand is enabled - true for all pools created by FreeNAS - then once all disks in the raidz are of the new size, the vdev (and thus pool) will use the new disk size.
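The one-disk-at-a-time procedure can be simulated step by step to show when the capacity actually grows. This is an illustrative sketch only; the real replacement happens with `zpool replace` at the ZFS layer, and the TB figures here ignore metadata overhead:

```python
def raidz_usable_tb(disks, parity=1):
    # Capacity is limited by the smallest disk in the vdev.
    return (len(disks) - parity) * min(disks)

disks = [4, 4, 4, 4]  # TB, a 4-disk raidz1
for i in range(len(disks)):
    disks[i] = 8      # replace one disk, wait for the resilver
    print(f"after replacing disk {i + 1}: {raidz_usable_tb(disks)} TB usable")
```

Usable space stays at 12 TB while any 4 TB disk remains, then jumps to 24 TB once the last one is replaced (provided autoexpand is on).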
 

K_switch

Dabbler
Joined
Dec 18, 2019
Messages
44
Replace one disk at a time
Not to hijack the thread, but I had a situation where some drives had failed or were failing in an 8-disk pool of mirrored vdevs. When I went to replace a failed drive, the other disk in the vdev started reporting quite a few checksum errors and wouldn't let the resilvering complete... After speaking with a friend, he mentioned that it is always best to leave the drive you are replacing in, even if that means degrading the pool by removing a different disk... Is that considered best practice?

On a side note... I was actually able to manually back up all data on the pool while 3 drives were dead and 2 more were throwing checksum errors.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
that it is always best to leave the drive you are replacing in even if that means degrading the pool by removing a different disk... Is that considered best practice

Ah, the dreaded universal statement, the "Allsatz": "It is always best". This really depends.

If you are experiencing some read errors on the drive, it can be good to leave the drive in as it will provide additional parity if another drive shows errors during the resilver. This can work really well with raidz1 and a spare, for example. There's a story on these forums about someone who survived multiple disk failures on a raidz1 vdev this way, by bringing in additional spares every time a drive started showing errors.

On the other hand, the read errors can slow down the resilver, and make it more likely that other drives will fail during the resilver, particularly if there are a LOT of read errors on the failed drive.

I'd say: If there are only a few read errors, and in case of single parity, consider leaving the drive in so it can help with parity.
If there are a lot of read errors, or you have double or triple parity already, consider taking the drive out so resilver finishes more quickly and the risk of another drive failing during resilver is reduced.
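That rule of thumb could be encoded roughly like this. Purely illustrative: the function name and the error threshold are my own inventions, not any ZFS feature, and real decisions should weigh the SMART data too:

```python
def keep_failing_drive_in(read_errors, parity_level, few_errors_threshold=10):
    """Rough heuristic from the advice above: a lightly-erroring drive
    in a single-parity vdev is worth keeping for the extra redundancy;
    a heavily erroring drive, or one in a raidz2/raidz3 vdev, is better
    pulled so the resilver finishes faster."""
    if parity_level >= 2:
        return False  # double/triple parity: pull it
    return read_errors <= few_errors_threshold

print(keep_failing_drive_in(read_errors=3, parity_level=1))    # True
print(keep_failing_drive_in(read_errors=500, parity_level=1))  # False
```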
 