Migrate to smaller disk

elangley

Contributor
Joined
Jun 4, 2012
Messages
109
Hi All,

In FreeNAS-11.1-U7 I have a pool with two 6TB spinning disks in a mirror that I would like to replace with two 2TB SSDs. There is about 1TB of data on the pool, so there will be enough room on the 2TB pool.

Ideally I would replace the disks one at a time, but when I try to use the "Replace" function in the GUI I get an error.

How can this best be accomplished?

TIA,

~eric
 
Joined
Oct 18, 2018
Messages
969
If you get an error, it is helpful to post exactly what that error is. Also, I think you'll have to create a new pool and migrate your data. ZFS won't allow you to replace a larger disk with a smaller one, as far as I am aware.
 

elangley

[MiddlewareError: Disk replacement failed: "cannot replace gptid/bf212cfe-18ae-11e8-8a77-000c294ced6a with gptid/407dbc33-6375-11e9-a43c-000c294ced6a: device is too small, "]

Thanks. Now that I can see the entire error line after pasting it, it is obvious that disk size is the issue.

I was hoping to swap out and resilver one disk at a time to avoid the downtime associated with copying data. If there are any other solutions I'd be interested in hearing them.

~eric
 

scrappy

Patron
Joined
Mar 16, 2017
Messages
347
ZFS does not currently allow for smaller drives to replace larger ones in a zpool. Your best option is to create the SSD zpool mirror separately, do a ZFS snapshot of your current datasets from the spinning drive zpool, then perform a ZFS send/recv to send all the things over to your new SSD zpool.
 

elangley

Thanks for the input, super helpful. I have ZFS send/recv working to another volume/dataset on the same system. Here is the method I used to keep the data accessible during the migration, which may help someone else.

1) From Storage, snapshot the source volume with the name: initial
2) From SSH: zfs send volume/dataset@initial | pv | zfs recv -F other_volume/dataset
The data remains accessible on the source volume during the transfer.
Piping through pv shows the amount sent, the transfer rate, and the elapsed time.
Once the copy job finishes...
3) From Storage, snapshot the source volume with the name: incremental
4) From SSH: zfs send -i initial volume/dataset@incremental | pv | zfs recv other_volume/dataset
5) In Storage/Snapshots, use Rollback Snapshot to make it the live dataset and set up Shares.
6) Point clients to the new volume.
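For anyone following along over SSH, the steps above can be sketched as a single session roughly like this. This is only a sketch: volume/dataset and other_volume/dataset are placeholders for your actual pool and dataset names, and the snapshots are taken from the CLI here instead of the Storage GUI.

```shell
#!/bin/sh
# Sketch of the migration above; dataset names are placeholders.
SRC=volume/dataset          # source dataset on the 6TB spinning mirror
DST=other_volume/dataset    # destination dataset on the new SSD mirror

# 1-2) Initial snapshot and full send; the source stays online throughout,
#      and pv reports bytes sent, transfer rate, and elapsed time.
zfs snapshot ${SRC}@initial
zfs send ${SRC}@initial | pv | zfs recv -F ${DST}

# 3-4) After the full copy finishes, catch up with an incremental send.
#      Adding -F on this receive is a precaution (not in the original steps):
#      it rolls back any changes made on the destination since @initial,
#      which would otherwise make an incremental receive fail.
zfs snapshot ${SRC}@incremental
zfs send -i initial ${SRC}@incremental | pv | zfs recv -F ${DST}

# 5-6) Then switch shares and clients over to ${DST}; the old pool can be
#      exported or destroyed once everything checks out.
```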

~eric
 