Add HDDs to main pool

Status: Not open for further replies.

madtulip · Explorer · Joined Mar 28, 2015 · Messages: 64
Hello everyone.

I currently have 4*4TB HDDs grouped as one vdev, on which I created the main pool. FreeNAS runs from a USB stick that is mirrored to a second USB stick. I would like to expand the vdev to 6*4TB. My case has six slots, and available disk space is dropping. While building the 4*4 system I simply assumed (without reading about it) that you can add disks to a vdev, as that was possible in my old NAS setup. When switching to FreeNAS I somehow got the impression it was just a bigger and better version of the NAS I had before, and in certain respects it is; in others, like this one, I am really surprised. As I have now found out, it does not seem to be possible to add HDDs to a vdev that has already been created. I could, however, create a new vdev_2 and add that to the main pool. Is that correct? That would mean every vdev carries its own redundancy inside itself, which is inefficient in terms of free space.

So I guess I have to dump my whole main pool to another system, delete my current vdev, and create a new vdev with the six disks. I have a Windows and a Debian system here that could hold the data, I guess. I would then create a new main pool on the new vdev and move my data back onto it. That seems unpleasant and quite error-prone, so I thought it would be good to post here and think my plans through thoroughly, in case there is an easier solution or I am about to break something.

Apart from the bare, dead data like media files, which would be fine to just move, there are the system-specific files and settings I'd like to preserve if possible. On the main FreeNAS OS this means things like user accounts, file permissions, CIFS shares, snapshot tasks, and other things you can set via the FreeNAS web interface. On top of that I have two jails running with lots of customized settings, such as Apache web services.

It would be nice and very helpful if you could point me in the right direction as to the best plan of action here.

Thank you very much.
 

madtulip · Explorer · Joined Mar 28, 2015 · Messages: 64
Thank you for the link. It was a good read and touched on some topics I might double-check later.

Let me rephrase my question. I already knew that I have to recreate the vdev containing my main pool. Before I do that, I want to move the contents of my current pool to another system. I would then kill the current vdev, add two HDDs, create a new vdev using all six disks, and create a new pool. I would then move the data back onto that pool. I'm not sure how to do that, though. I thought I could take a snapshot of the main pool and copy a clone of it to one of my other systems.

I had problems restoring jails from such clones earlier, though:
https://forums.freenas.org/index.php?threads/rollback-to-an-old-snapshot-of-a-jail.30495/

So the question is: how do I migrate the pool somewhere else while I replace the underlying vdev, so that I don't have the hassle of reconfiguring everything afterwards?
 

depasseg · FreeNAS Replicant · Joined Sep 16, 2014 · Messages: 2,874
Is your other pool FreeNAS (or at least ZFS)? If so, look at replication. If not, look at rsync.
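If the destination box is not ZFS, a plain rsync copy would at least carry the file data across (though not ZFS snapshots or dataset properties). A minimal sketch, with hypothetical host and path names:

```shell
# Hypothetical names: "backupbox" is the destination host, paths are placeholders.
# -a archive mode, -H preserve hard links, -A ACLs, -X extended attributes
# (ACLs/xattrs only survive if the destination file system supports them).

# Dry run first to see what would be transferred:
rsync -aHAXn --progress /mnt/main_pool/ backupbox:/srv/freenas-backup/

# Then for real:
rsync -aHAX --progress /mnt/main_pool/ backupbox:/srv/freenas-backup/
```

The trailing slash on the source matters: it copies the pool's contents rather than nesting a `main_pool` directory inside the destination.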
 

Robert Trevellyan · Pony Wrangler · Joined May 16, 2014 · Messages: 3,778
So the question is: how do I migrate the pool somewhere else while I replace the underlying vdev, so that I don't have the hassle of reconfiguring everything afterwards?
I've reconfigured my pool a couple of times. What I did each time was attach a couple of large drives via eSATA and create a mirrored pool on them. Then I used zfs send | zfs receive commands to copy the snapshot to this temporary storage. Then I created the new pool and used zfs send | zfs receive to copy the snapshot back. Combine this with a backup/restore of your configuration and you should be in good shape. To get this right you will have to read and understand the appropriate sections of the zfs man page, and do dry runs (-n) before you launch the commands for real.
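Not the exact commands I ran, but the dance looks roughly like this; "main_pool", "temp_pool", and the "@migrate" snapshot name are placeholders:

```shell
# Take a recursive snapshot of everything in the pool:
zfs snapshot -r main_pool@migrate

# Dry run (-n on receive) to check what would happen:
zfs send -R main_pool@migrate | zfs receive -nv temp_pool/backup

# For real: -R sends the dataset tree with properties and snapshots,
# -F lets the destination be rolled back/overwritten as needed:
zfs send -R main_pool@migrate | zfs receive -F temp_pool/backup

# After destroying main_pool and recreating it with 6 disks,
# send everything back the same way:
zfs send -R temp_pool/backup@migrate | zfs receive -F main_pool
```

The -R flag is what preserves child datasets (like the jails) and their properties in one stream.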
I thought I could take a snapshot of the main pool and copy a clone of it to one of my other systems.
If one of your other systems is running an SSH server, you should be able to send the snapshot stream there and back instead.
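For example, assuming a hypothetical "debianbox" that runs sshd and has ZFS available (names are placeholders):

```shell
# Push the replication stream to a pool on the remote machine:
zfs send -R main_pool@migrate | ssh user@debianbox zfs receive -F tank/freenas-backup

# ...and pull it back after rebuilding the vdev:
ssh user@debianbox zfs send -R tank/freenas-backup@migrate | zfs receive -F main_pool
```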
I had problems restoring jails from such clones earlier, though
I was able to restore my VirtualBox jail, but I failed at restoring plugins. I no longer use plugins, so that's not something I care about anymore.

Before I did any of this last time around, I also took a screenshot of every configuration screen that had non-trivial settings, just in case.
 

madtulip · Explorer · Joined Mar 28, 2015 · Messages: 64
I've reconfigured my pool a couple of times. What I did each time was attach a couple of large drives via eSATA and create a mirrored pool on them. Then I used zfs send | zfs receive commands to copy the snapshot to this temporary storage. Then I created the new pool and used zfs send | zfs receive to copy the snapshot back. Combine this with a backup/restore of your configuration and you should be in good shape. To get this right you will have to read and understand the appropriate sections of the zfs man page, and do dry runs (-n) before you launch the commands for real.

That is a good idea. Thank you!

If one of your other systems is running an SSH server, you should be able to send the snapshot stream there and back instead.
I'll have to do some research into whether this snapshot can be transferred to a box that doesn't have a ZFS file system (maybe to a .dump file?). Otherwise I'll try to get some additional HDDs, I guess.

I was able to restore my VirtualBox jail, but I failed at restoring plugins. I no longer use plugins, so that's not something I care about anymore.
Is it sufficient to just create a "periodic snapshot task" on the jail's dataset (i.e. main_pool/jails/jail_1) after powering the jail down? I'm a bit worried about the jail's VM being tied to the root FreeNAS operating system, and that a simple copy-paste replacement of the dataset would not be the right way to do it.
 

Robert Trevellyan · Pony Wrangler · Joined May 16, 2014 · Messages: 3,778
research into whether this snapshot can be transferred to a box that doesn't have a ZFS file system
It can. zfs send creates a data stream that can be output to a file; zfs receive reads from a data stream, which can be a file. When you pipe one to the other, you're just bypassing the creation of a file. This is basic *nix command-line foo, so that's where you should focus your research.
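A sketch of the file-based variant, with placeholder names. One caveat worth knowing: a stream stored as a file is all-or-nothing, and zfs receive will reject the whole thing if any part of the file is corrupted, so verify the copy (e.g. with a checksum) before destroying the source pool.

```shell
# Capture the replication stream as a plain file on a non-ZFS box:
zfs send -R main_pool@migrate | ssh user@debianbox 'cat > /srv/backup/main_pool.zfs'

# Replay it after rebuilding the pool:
ssh user@debianbox 'cat /srv/backup/main_pool.zfs' | zfs receive -F main_pool
```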
Is it sufficient to just create a "periodic snapshot tasks" from the dataset
A periodic snapshot task is overkill. Just take an on-demand snapshot.
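Something like this, taken after stopping the jail so the data is quiescent (dataset and snapshot names are placeholders):

```shell
# One-off snapshot of the jail dataset:
zfs snapshot main_pool/jails/jail_1@pre-migration

# Verify it exists:
zfs list -t snapshot -r main_pool/jails/jail_1
```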
 

rogerh · Guru · Joined Apr 18, 2014 · Messages: 1,111
Have you decided whether the new pool is going to be RAIDZ1 or RAIDZ2? If not, that question needs considering before you rebuild it.
 

madtulip · Explorer · Joined Mar 28, 2015 · Messages: 64
Have you decided whether the new pool is going to be RAIDZ1 or RAIDZ2? If not, that question needs considering before you rebuild it.
Currently it's RAIDZ2 with 4*4TB WD Reds, and I'm shooting for RAIDZ2 with 6*4TB WD Reds. When I was building it, I became convinced that RAIDZ2 was the minimum redundancy that would satisfy my conservative feeling about data security, and that a later upgrade to four out of six HDDs effectively usable would still be a fair trade-off between cost and space efficiency.
 

madtulip · Explorer · Joined Mar 28, 2015 · Messages: 64
It can. zfs send creates a data stream that can be output to a file; zfs receive reads from a data stream, which can be a file. When you pipe one to the other, you're just bypassing the creation of a file. This is basic *nix command-line foo, so that's where you should focus your research.

A periodic snapshot task is overkill. Just take an on-demand snapshot.
Thank you very much for all the information, Robert!
 

rogerh · Guru · Joined Apr 18, 2014 · Messages: 1,111
Currently it's RAIDZ2 with 4*4TB WD Reds, and I'm shooting for RAIDZ2 with 6*4TB WD Reds. When I was building it, I became convinced that RAIDZ2 was the minimum redundancy that would satisfy my conservative feeling about data security, and that a later upgrade to four out of six HDDs effectively usable would still be a fair trade-off between cost and space efficiency.
That's the same conclusion I came to. I was just asking because it wasn't clear from your posts.
 