Upgrading disks in RAID-Z1 using re-silver - defragmentation?

Status
Not open for further replies.

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hi,

I have a HP MicroServer Gen8 running FreeNAS 9.3.

It has four 3 TB HDDs, running in a RAID-Z1 configuration.

I'm running quite low on capacity - it's around 90% filled at the moment, so I'm looking at upgrading the capacity.

I've purchased four 5 TB HDDs.

My first question is: what is the safest way to upgrade to the larger disks? I'm guessing I can just shut down the server, swap out one drive for another, start it up, have it detect the missing disk, and re-silver, disk by disk?

Or is there another way I can do it that you'd suggest?

My second question is - is there any way I can also use this process to "defragment" the data on the pool? I know fragmentation isn't normally a problem on ZFS, but I also know that if you run low on free disk-space, it can be an issue.

Will the re-silvering have any effect on fragmentation? Or is there something extra I can do as part of the process, to de-fragment the pool?

Cheers,
Victor
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
My first question is, what is the safest way to upgrade to the larger disks?
http://doc.freenas.org/9.3/freenas_storage.html#replacing-drives-to-grow-a-zfs-pool

Note, though, that there's a not-insignificant probability of an unrecoverable read error when resilvering a RAIDZ1 array using disks of this size. If you don't have room to install an extra drive temporarily, expanding the pool in this way removes your redundancy and could result in some pool corruption.
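For reference, the disk-by-disk procedure in that doc boils down to something like this at the command line (a sketch only; the pool name datastore matches this thread, but ada1/ada5 are placeholder device names, and on FreeNAS you'd normally drive this through the GUI so the replacement disk gets the standard partition layout):

```shell
# Offline the old disk (or shut down and physically swap it)
zpool offline datastore ada1

# After installing the new 5 TB disk, tell ZFS to rebuild onto it
zpool replace datastore ada1 ada5

# Watch the resilver; only move on to the next disk after it completes
zpool status datastore

# Once all four disks are replaced, let the pool grow to the new size
zpool set autoexpand=on datastore
zpool online -e datastore ada5
```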
 
Joined
Oct 2, 2014
Messages
925
I am going to second this, and add that you MIGHT want to open the Micro up and attach another drive to your server so you don't have to pull one out. Or, if you have access to another system with enough SATA ports or a recommended HBA, I would use that to rework the pool into a RAIDZ2 config.

i.e. I would move the drives to a different system with more SATA ports/power connections, make a new RAIDZ2 pool with the 5 TB HDDs, copy the datasets over to the newly created RAIDZ2 pool, run a scrub after it completes, and then move the 5 TB drives back to the Micro.
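The rework-into-RAIDZ2 step might look roughly like this from a shell (a sketch only; tank2 and the da4-da7 device names are made-up placeholders, and on FreeNAS you would normally create the pool through the GUI):

```shell
# Create a new RAIDZ2 pool from the four 5 TB disks (placeholder names);
# RAIDZ2 survives two disk failures, at the cost of one more disk of parity
zpool create tank2 raidz2 da4 da5 da6 da7

# ...copy the datasets over from the old pool...

# Then verify everything landed intact before moving the disks back
zpool scrub tank2
zpool status tank2
```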
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hi,

Hmm, I do have access to a second HP MicroServer N54L (slightly older model, but similar).

That currently has four drives in it as well - but I can shut that down, take out those drives, install the new 5 TB drives - and maybe just zfs send/receive the data over the network? Would that work well? Or should I be using something like rsync instead? Will zfs send/receive versus rsync have any effect on fragmentation? (Yes, that's probably a secondary concern to getting the expansion done safely, but if I can get it as a side-effect of this process, that would be useful.)

Also, if I do it that way, do I still want to use RAID-Z2, or will RAID-Z1 work?

Cheers,
Victor
 
Joined
Oct 2, 2014
Messages
925
That might take a long time to transfer over a gigabit network... my suggestion is to use a server capable of having all 8 HDDs connected at once, so that the only limit is the HDDs/HBA card. I would want RAIDZ2: as @danb35 already pointed out, once you go with large HDDs, the possibility of issues during a resilver goes up if one of them dies. At least with RAIDZ2 you have a bit more buffer, at the cost of available space. If you keep good onsite backups and don't care about the possibility of losing data/the whole pool during a resilver of 5 TB HDDs, then go for 5 TB HDDs in RAIDZ1.
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Also, here is my zpool list with the FRAG column - not sure how accurate that is?
Code:
[victorhooi@freenas] /% zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
datastore     10.9T  9.66T  1.21T         -    22%    88%  1.00x  ONLINE  /mnt
freenas-boot  3.62G  2.28G  1.34G         -      -    62%  1.00x  ONLINE  -
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Aha, yes, you're right - it could take a while over Gigabit.

Hmm, doing back-of-the-envelope calculations, 10 TB over 100 MB/s will take a little over a day - to be honest, I'm prepared to wait that long if you think it's safer doing it this way (and it sounds like it'd be faster than doing the re-silver for each of the four drives anyway). Even if it takes 3-4 days, that's OK for me.
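That back-of-the-envelope figure checks out; a quick sanity check (assuming decimal units for disk capacity and ~100 MB/s as a realistic gigabit ceiling):

```python
# Rough transfer-time estimate: 10 TB over gigabit Ethernet at ~100 MB/s
data_bytes = 10e12          # 10 TB (decimal units, as drive vendors count)
rate_bytes_per_s = 100e6    # ~100 MB/s sustained over gigabit

hours = data_bytes / rate_bytes_per_s / 3600
print(f"{hours:.1f} hours")  # about 27.8 hours, i.e. a little over a day
```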

Is there any preference for zfs send/receive versus rsync here? (Assuming I'm not using any zfs snapshots at the moment).

I don't think I'd have access to a box with four bays to borrow...lol. I would probably have to buy a new machine.
 
Joined
Oct 2, 2014
Messages
925
I don't have any experience with rsync, and I have only ever done this procedure https://forums.freenas.org/index.php?threads/copy-move-dataset.28876/#post-190056 to move my stuff around. And you don't need bays... you'd just need the ability to connect the HDDs :p I can't find the photo I want... but I have done a migration of 8 HDDs from one RAID 10 to another. It took me 2 SAS-to-SATA breakout cables (one for each RAID 10) and a few Molex-to-SATA power adapters lol

EDIT: found the photo I wanted; sorry for the poor photo quality, it was recovered from my now-dead iPhone.

P.S. This photo predates my FreeNAS server and all the other servers in my current lab lol, so when I say RAID 10 it was HW RAID.
 

Attachments: REC_IMG_00039.JPG (41.9 KB)
Last edited:

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Is there any preference for zfs send/receive versus rsync here?
Yes, because replicating a recursive snapshot preserves the full ZFS filesystem structure, i.e. the datasets themselves. With rsync all you get is the folder structure.
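A recursive replication along those lines might look like this (a sketch only; datastore is the pool from this thread, while newpool and the snapshot name migrate are placeholders):

```shell
# Take a recursive snapshot of every dataset in the source pool
zfs snapshot -r datastore@migrate

# Replicate the whole hierarchy - child datasets and their properties -
# into the new pool (insert an ssh hop if the pools are in different boxes)
zfs send -R datastore@migrate | zfs receive -F newpool

# rsync, by contrast, would only copy files into a single target dataset
```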
 