Rebuilding Pools on smaller drives - Possible?

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
Edit: If this is the wrong forum to post in, please let me know.


Please excuse me if I use incorrect terminology. My system is a home system with 4 users, a Plex server for the smart TVs, an archive of movies for those computer users who don't want to use Plex, and an archive of photos, documents, etc.

I had one 13TB pool and everything was created in it. (The original system configuration is in my profile, I think.)

I was able to purchase some additional drives and create a new pool and move (copy/verify/delete) 'my archive data' from the original pool to the new pool... Everything is fine so far.

So now I still have this huge pool (13TB) with a small amount of data on it (5GB), which includes my jail for Plex, some release and download files from the 11.3-U1 update, some nightly backups of my FreeNAS config db, and the user home directories for the users (which are currently empty).

My Goal is to end up with the following 3 pools:
Pool 1 - Nightly Backups, iocage (jails, logs, releases, download), and other misc files
Pool 2 - User homes
Pool 3 - My Archives (created from new drives)

What I would like to do is save everything from the Current Large Pool, delete the Current Large Pool, create two new pools (Pool 1 and Pool 2) from the existing "old drives", then restore/re-create the original files into Pool 1 and recreate the user home directories in Pool 2.

I don't have the funds right now to purchase more disks to build Pool 1 and Pool 2 in addition to the Current Large Pool, but I do have an extra disk (empty) that has more than enough space to hold all of the existing data on the Current Large Pool.

Is this even possible? And if so, are there current instructions on how to do this?

Thanks in advance for any and all help you can give me.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Is this even possible? And if so, are there current instructions on how to do this?

zfs send | recv allows you to do that, creating a block-level replica of the data you select at the level of a pool or dataset (and it can be recursive).

Although this article is titled as the opposite of what you want (I'll come back to that in a second), it will actually get you what you want:

in particular, these commands, but also the parts about renaming and re-importing the pools:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive temp-tank
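The renaming and re-importing part mentioned above boils down to something like this (a hedged sketch using the article's example pool names, to be run once replication has finished and the old pool has been destroyed or exported):

```shell
# Export the replica pool, then import it back under the original name
zpool export temp-tank
zpool import temp-tank tank
```

After the import, the replica answers to the old pool's name, so shares and paths that reference it keep working.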

So now I still have this huge pool (13TB) with a small amount of data on it (5GB)
Also, don't forget that having some free space is a good thing with ZFS, as you shouldn't really exceed 80% full if you want your system to perform properly, so you're already over halfway to full there.
Consider the 80% point when looking at your pool design for the other pools and think about why a separate pool is really needed before creating one. A pool with a lot of headroom is actually good.
 

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
Thanks, I'll look into those references

Also, don't forget that having some free space is a good thing with ZFS, as you shouldn't really exceed 80% full if you want your system to perform properly, so you're already over halfway to full there.
Consider the 80% point when looking at your pool design for the other pools and think about why a separate pool is really needed before creating one. A pool with a lot of headroom is actually good.
I think I have less than halfway there... the pool is currently 13TB and the data is 5GB... that's closer to 0.038%
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
I think I have less than halfway there... the pool is currently 13TB and the data is 5GB... that's closer to 0.038%
OK; fair enough... I haven't seen used % that low so mentally did the adjustment to TB. That's fine.
 

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
I read through the linked article, and yes, it all sounds very doable and mostly straightforward. But... (there is always a but!)

As far as I saw, the solutions were geared towards either A) Having an external system available to hold the snapshot (data) temporarily or B) Having a new pool built already available on the new system.

In my case I have neither.

Can I use a dataset on an existing pool to temporarily hold the snapshot/data while I destroy the existing pool and recreate a new pool? Or am I missing something in how the send and receive commands work and how to set up the "receiving" area to hold the snapshot (data) before I destroy the old pool and build the new one?
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
Can I use a dataset on an existing pool to temporarily hold the snapshot/data while I destroy the existing pool and recreate a new pool? Or am I missing something in how the send and receive commands work and how to set up the "receiving" area to hold the snapshot (data) before I destroy the old pool and build the new one?

Yes, you can target the replication to any old dataset, local or remote, then replicate it over to the root or a dataset of the new pool once you've built it. Just don't replicate it for storage to the pool you're about to destroy :)

I'd recommend against targeting the boot pool if that's what you were thinking. The GUI won't let you. It'll work if you do it on the command line, but I don't know how insistent FreeNAS/TrueNAS is about the exact shape of the boot pool and wouldn't want to cause trouble doing updates in future.
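For the local-dataset case, the staging step could look something like this (a hedged sketch; the pool and dataset names here are illustrative, not prescriptive):

```shell
# Stage the replica in a dataset on the surviving pool
zfs snapshot -r M1Pool@migrate
zfs send -R M1Pool@migrate | zfs receive Archive/temp-M1

# ...later, once the new pool has been built, replicate it onward.
# -F is assumed necessary because the new pool's root dataset already exists.
zfs send -R Archive/temp-M1@migrate | zfs receive -F NewM1Pool
```

The replication stream carries the @migrate snapshot along with it, which is why the staged copy can itself be sent onward in the second step.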
 

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
Yes, you can target the replication to any old dataset, local or remote, then replicate it over to the root or a dataset of the new pool once you've built it. Just don't replicate it for storage to the pool you're about to destroy :)

I'd recommend against targeting the boot pool if that's what you were thinking. The GUI won't let you. It'll work if you do it on the command line, but I don't know how insistent FreeNAS/TrueNAS is about the exact shape of the boot pool and wouldn't want to cause trouble doing updates in future.
Wait, the boot pool? No, I am not targeting the pool where FreeNAS is installed (the two SSDs in the system).

So I currently have two pools (not using the drives the OS is installed on):
M1Pool: 13 TiB size, 5 GiB used
Archive: 13 TiB size, 5 TiB used
I want to store off M1Pool,
reconfigure the drives, creating two smaller pools,
and restore M1Pool.

In the end I want to end up with:
M1Pool: 7 TiB size, 4 GiB used (approximately)
Archive: 13 TiB size, 5 TiB used (no real change)
NewPool: 7 TiB size, 1 GiB used (approximately)

Reviewing @sretalla's post above and the link provided, I don't think this will work as described for me. I cannot create the "temp-tank" pool prior to the start of the process; I do not have the extra drives. I will basically be creating the new M1Pool and the NewPool out of the drives from the current M1Pool.

I think these are the steps I need to follow (but I still have some questions). However, if I have totally gone off the rails here and there is no alternative method, then I guess I will have to wait 3 more months until I have the funds to purchase additional drives (and I'll still have to explain to the wife, um, CFO, why I need "more drives"!). As I get answers to the questions I will update the steps, but I will do so in new posts so that others can follow the questions and answers and maybe see where I went wrong in my thinking.

-----------------------------------------------
Migrating 1 Pool when you want to split the existing Pool into two Pools.

This assumes that you have one pool to split and a second pool with enough space to hold the data from the pool being split, but that the second pool also contains data you do not want to destroy.


The assumption is that "M1Pool" is the pool (and root dataset) to be split, and "Archive" is the name of the other pool, with data that I want to retain.

1. The system dataset needs to be moved off of "M1Pool". Use the GUI to select a new location other than M1Pool.
When I created the original "M1Pool" pool, I don't think I changed the System Dataset, and I don't remember changing anything when I created Archive.
Just now I went looking through the GUI, and under "System" I found "System Dataset". It currently points to Archive, but has the options freenas-boot, M1Pool, and Archive. Is freenas-boot a better option, or is it best to leave it as Archive? (Obviously M1Pool is NOT the correct option.)

2. Create a system config backup using the GUI. This will be needed later, because when you detach M1Pool, you will lose your share, snapshot and replication settings.
I don't see this in the GUI; what am I missing?
I do back up the config file nightly via a cron script, but reading the script, it turns out it is saving it in a dataset on M1Pool. I can modify it easily enough to save to Archive until the move is done, and then modify it back. Do I need to create a separate backup just for this migration? I understand the "Share" settings, but since I will be recreating M1Pool with the same name, will I still lose those? Will I lose ALL share settings, or just the share settings that currently point to datasets in M1Pool?

3a. Use the GUI to create a snapshot of the dataset you want to move. If you want to move everything, select the root dataset.
Under Storage -> Snapshots, I can create a snapshot of M1Pool and select Recursive as I want to move everything.
I am the only one using the system at present (I kicked everyone else off), so updates to the system should be non-existent. But how do I direct where to save it, since I will be destroying the M1Pool pool before I restore it?
3b. Alternatively, you can use the CLI to create the snapshot and then replicate manually.
Code:
zfs snapshot -r M1Pool@migrate
zfs send -R M1Pool@migrate | zfs receive temp-M1Pool


My understanding is that the first command
Code:
zfs snapshot -r M1Pool@migrate
creates a recursive snapshot of the M1Pool and names it "migrate", saving it to the M1Pool pool.

Then the second command
Code:
 zfs send -R M1Pool@migrate | zfs receive temp-M1Pool
will send the snapshot, and all associated data, structures, etc. over to the temp-M1Pool and recreate the file structure in the temp-M1Pool.
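(An aside, not from the article: before sending, it may be worth confirming that the recursive snapshot actually covers every dataset in the pool.)

```shell
# List all snapshots under M1Pool; each dataset should show an @migrate entry
zfs list -t snapshot -r M1Pool
```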


Q1: Since I don't have a temp-M1Pool anyplace, and no way to create the vdevs for it, can I create a dataset under Archive named temp-M1 to receive it? I really don't want to receive and expand the snapshot into OtherData (which I assume would wipe out the existing data). In that case, would the command become:
Code:
 zfs send -R M1Pool@migrate | zfs receive Archive/temp-M1
Of course, then we are back to the issue that the data is replicated, but it really isn't a new pool that can be mounted/unmounted.
Q2: would it make sense in this case to change the second command to:
Code:
zfs send -R M1Pool@migrate > /mnt/Archive/migrateS
thereby storing the "stream" in a file named migrateS on the other pool? (Note the redirect: zfs send writes the stream to stdout, so it is saved with >, not piped with |.)
Assuming the answer to Q2 is Yes, then to restore that data I would do the following:
4. Once replication is complete, it is time to detach M1Pool. Use the GUI to "detach volume" for M1Pool; when the confirmation window pops up, CHOOSE THE OPTION TO DESTROY. This is the point of no return, so know what you are doing before confirming.

5. Using the GUI, go into the Storage tab and create the new M1Pool, using a subset of the disks from the previous M1Pool, or a completely different set of disks. If you end up with additional drives, you can create a new pool using those drives.

6. Using the CLI, issue the following command to recreate the M1Pool data structures and data from the previously saved snapshot stream file (-F should be needed here, since the newly created pool's root dataset already exists and must be overwritten):
Code:
zfs receive -F M1Pool < /mnt/Archive/migrateS


7. Once M1Pool is restored, you can either manually recreate your shares, or you can restore from the configuration backup we made in step 2.

8. Remove/delete the /mnt/Archive/migrateS file as it is no longer needed.
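Pulling steps 3b-6 together, the file-based variant would look something like this end to end (my hedged reading of the steps above, not verified on a live system; the -F on receive is an assumption, as noted in step 6):

```shell
# 3b. Recursive snapshot, stream saved to a file on the surviving pool
zfs snapshot -r M1Pool@migrate
zfs send -R M1Pool@migrate > /mnt/Archive/migrateS

# (Steps 4-5: destroy M1Pool and recreate it via the GUI.)

# 6. Restore the saved stream into the newly created pool
zfs receive -F M1Pool < /mnt/Archive/migrateS

# 8. Clean up the stream file once the restore is verified
rm /mnt/Archive/migrateS
```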

What am I missing or doing wrong here?
 