I have a pool made of 12 drives in striped mirrored vdevs. I'd like to make it an 11 drive RAID-Z2 with a hot spare (there are 2 other pools on this machine). I'm trying to not impact the shares, snapshot task, replication tasks and jail configs.
I believe the basic steps are:
1. create snapshot
2. replicate snapshot
3. destroy pool/vdevs
4. create new pool/vdev
5. replicate snapshot to new pool/vdev
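In ZFS terms I'm picturing something like this (a rough sketch only; the snapshot name is a placeholder, and the -F assumes the backup pool is dedicated to this copy, since it overwrites what's there):

Code:
# recursive, point-in-time snapshot of the whole pool
zfs snapshot -r tank@FOR-MIGRATION
# send the full tree into the backup pool's root dataset
zfs send -R tank@FOR-MIGRATION | zfs receive -F backup-tank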
I'm wondering how to keep the impact on my jails, shares, snapshot tasks, and replication tasks as low as possible. Do I need to destroy the entire pool and start from scratch, or can I remove just the vdevs?
Or can I make a backup of the config, destroy the pool, create a pool with RAID-Z2 vdev and the same name, and then restore the config?
I guess I'm wondering how destructive removing the pool is to the other parts of the FreeNAS configuration.
So I ran a quick test on a test system last night. After importing the new (old) pool, the snapshots and replication schedules don't show up. It appears that the config restore worked though. Crossing fingers.
Did you know that only periodic snapshots can be configured for replication (not the one time manual one, grrrr)? Which not only impacts the migration, but more importantly the return.
CLI? Not a problem - you'd have to use it anyway. You should be able to pipe zfs send's output right into zfs receive (with appropriate parameters, naturally).
The first import will also have to be from the CLI, as the pool has to be renamed. After that is done, export via the CLI and reimport it with the GUI to get things working normally again.
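In concrete terms, something like this (pool names assumed from your description):

Code:
# the rename happens as part of the import
zpool import new-tank tank
# export again, then reimport 'tank' from the GUI
zpool export tank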
Since I've got 17TB to move tonight, would you mind just double-checking these steps? I found that even though the tank@FOR-MIGRATION-20151030 snapshot doesn't seem to exist on backup-tank, when I run zfs send -R backup-tank@FOR-MIGRATION-20151030 the individual datasets are created and replicated. Surprisingly, though, a reservation I had set on RESERVED didn't come across in the replication (it was ~300K).
0. change system dataset location to backup-tank, and make a config backup (shares and whatnot get deleted when the pool is deleted).
1. replicate "tank" dataset to "backup-tank" - DONE
2. export and destroy (no need to wipe the disks) tank using GUI
3. create "new-tank" using GUI
4. use CLI: zfs send -R backup-tank@FOR-MIGRATION-20151030 | zfs receive -F new-tank (the -F should be needed since the GUI-created pool's root dataset already exists; full commands are sketched after this list)
5. use GUI to export new-tank
6. use CLI to rename new-tank to tank: zpool import new-tank tank
7. use CLI to export tank: zpool export tank
8. use GUI to import volume 'tank'
9. change system dataset location to tank
10. restore config backup to re-enable snapshots, replication, etc
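For my own sanity, here are the CLI pieces of steps 4, 6, and 7 in one place (names are from my setup; treat this as a sketch, not gospel):

Code:
# step 4: replicate everything back from the backup pool
zfs send -R backup-tank@FOR-MIGRATION-20151030 | zfs receive -F new-tank
# step 6: rename by importing under the old name
zpool import new-tank tank
# step 7: export again so the GUI import picks it up cleanly
zpool export tank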
Well, it hasn't gone off without a hitch. I'm experiencing the ole "replication renders the pool unmounted" issue. Fun times. It sounds like it's related to permissions or a faulty snapshot, and it presents itself in the logs as:
Code:
Nov 3 00:01:08 freenas1 collectd[8673]: statvfs(/mnt/backup-tank/RESERVED) failed: No such file or directory
Nov 3 00:01:08 freenas1 collectd[8673]: statvfs(/mnt/backup-tank/VM) failed: No such file or directory
Nov 3 00:01:08 freenas1 collectd[8673]: statvfs(/mnt/backup-tank/VM/2) failed: No such file or directory
Nov 3 00:01:08 freenas1 collectd[8673]: statvfs(/mnt/backup-tank/backup) failed: No such file or directory
And from the command line, ll /mnt/backup-tank shows it's empty. If I run zfs mount -a, the folders reappear until the next replication.
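In case it helps with diagnosing, this is what I've been running to check the mount state (just standard zfs queries against my backup pool):

Code:
# see which datasets ZFS thinks are mounted, and where
zfs get -r -o name,property,value mounted,mountpoint,canmount backup-tank
# stopgap: remount everything (holds until the next replication unmounts it again)
zfs mount -a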
So, my issue now is that I want to get rid of all my snapshots, but there are a bunch of them (~3,500) and the GUI can't handle selecting them all (shift-clicking everything).
Will FN get all messed up if I destroy the snapshots from the CLI?
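For reference, this is the sort of thing I'd run from the CLI (the pool name is mine; the first line is a dry run that only echoes the commands):

Code:
# dry run: print the destroy commands without executing them
zfs list -H -t snapshot -o name -r tank | xargs -n 1 echo zfs destroy
# the real thing, once the list looks right
zfs list -H -t snapshot -o name -r tank | xargs -n 1 zfs destroy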