Expanding storage


conanius

Cadet
Joined
Dec 28, 2017
Messages
3
Hi Folks,

Newbie, first post, usually a Windows user. You get the background.

I've got a FreeNAS box (running 11.1) that I want to expand the storage on.

Currently - and I think this is all the right terminology - I've got a 4x4TB volume running RAIDZ1.

I've got my hands on 8x2TB drives that I want to use to expand my storage. My plan was to create a new vdev with these 8 drives running RAIDZ2.

Now, what I don't really understand is how FreeNAS will 'use' this extra space. Will it put the two vdevs in the same ZFS pool and then just spread the data across the two vdevs? Or will I just need to move the data around manually?

I thought I knew how this all worked (that I'd just move the data around manually), but I read an article the other day on expanding storage with additional vdevs and it's now totally confused me.

Sorry for such a newb question. I hope to look back on this in years to come and think, 'oh my, I really did post that, didn't I?'
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Now, what I don't really understand is how FreeNAS will 'use' this extra space. Will it put the two vdevs in the same ZFS pool and then just spread the data across the two vdevs? Or will I just need to move the data around manually?
If you expand the pool with a second vdev, ZFS will automagically distribute new writes to maximize performance - in effect, distributing with a strong preference for the vdev with more free space. You cannot redistribute old data in place, but - if you really need to - you can move it to a new dataset (which rewrites it), delete any old snapshots, and the rewritten data will be spread across the entire pool.

If you make a new pool, you have to manage things yourself.
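
In FreeNAS the supported way to do this is the GUI (Storage → Volume Manager → Extend), but the underlying ZFS operation is roughly the following - 'tank' and the da4-da11 device names are placeholders for your actual setup:

# Extend the existing pool with a second, 8-disk RAIDZ2 vdev
# (ZFS will insist on -f here, because the RAIDZ2 vdev doesn't
# match the pool's existing RAIDZ1 replication level)
zpool add -f tank raidz2 da4 da5 da6 da7 da8 da9 da10 da11

# Per-vdev size, allocation, and free space - shows where new writes land
zpool list -v tank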

Currently - and I think this is all the right terminology - I've got a 4x4TB volume running RAIDZ1.

I've got my hands on 8x2TB drives that I want to use to expand my storage. My plan was to create a new vdev with these 8 drives running RAIDZ2.
It's generally not a great idea to mix and match vdev topologies. It works, but it's a sort of "yeah, we don't bother testing this, it may be full of edge cases we didn't care about, etc. etc. etc." deal.
In your case, running two separate pools may not be such a bad idea.
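
The two-pool alternative, with the same placeholder names, is just a second create - FreeNAS would do this as a brand-new volume in the GUI:

# Leave the existing RAIDZ1 pool untouched; build a second pool from the 2TB drives
zpool create tank2 raidz2 da4 da5 da6 da7 da8 da9 da10 da11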
 

conanius

Cadet
Joined
Dec 28, 2017
Messages
3
Thanks for the detailed and quick reply - really appreciated.

The reason I ask about redistribution is the current vdev is at 91% capacity, and I'd read that ZFS would be most pleased if I could keep that below 80%.
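
For anyone wanting to check the same numbers, the usage figures are visible from the shell - 'tank' below is just a stand-in for whatever the volume is actually called:

zpool list tank                        # the CAP column is percent-of-pool used
zfs list -r -o name,used,avail tank    # per-dataset usage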

I've got an easy way to segregate the data amongst the vdevs, but that would require me to tell FreeNAS to put all the stuff in the 'TV Shows' folder on one vdev and all the stuff in the 'Movies' folder on the other... at that point I think I've missed the obvious point that users only see the file system and shouldn't be making such links. Is that the right way to see this?

I take on board your point about the vdevs having different topologies... does it make it even more cringeworthy that I'm splitting this new vdev over two controllers (6 disks on the onboard P410i controller, 2 disks on the onboard SATA controller)?

The baby jesus wept, etc.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
does it make it even more cringeworthy that I'm splitting this new vdev over two controllers
No, that doesn't make a difference.

I've got an easy way to segregate the data amongst the vdevs, but that would require me to tell FreeNAS to put all the stuff in the 'TV Shows' folder on one vdev and all the stuff in the 'Movies' folder on the other... at that point I think I've missed the obvious point that users only see the file system and shouldn't be making such links. Is that the right way to see this?
You can't override ZFS's distribution. If you want that level of control, you need separate pools.
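
Put another way (pool and dataset names here are made up): datasets in one pool all stripe across that pool's vdevs, so the only way to pin 'Movies' to specific disks is a second pool:

# Single pool: both datasets spread across all vdevs - no per-vdev placement
zfs create tank/tv-shows
zfs create tank/movies

# Two pools: each dataset physically lives on its own pool's disks
zfs create tank/tv-shows
zfs create tank2/movies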
 

conanius

Cadet
Joined
Dec 28, 2017
Messages
3
Ok, thanks for the clarification. I wonder in that case whether, to make life easier, I'd be better off making a brand-new pool and just putting some of the data on that - thus removing your edge-case concern.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
You could also add two 4x2TB RAIDZ1 vdevs to your existing pool. That way you are not mixing vdev types and you have a single pool. OR, with the new drives, make a new 6-drive Z2 pool, move your data over (not sure if space will allow... but possibly external storage for a bit?), and then abandon the old Z1 pool. Once this is done, combine the 4 4TB drives and the remaining 2 2TB drives to extend the new pool with another 6-drive Z2 vdev.

Soooo many options. :)
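
A rough sketch of that second route at the ZFS level - assuming the old pool is 'tank', the new one is 'tank2', and the device names are placeholders (FreeNAS itself would do most of this through the GUI, and whether the data fits is the open question above):

# 1. Build the new pool from six of the 2TB drives
zpool create tank2 raidz2 da4 da5 da6 da7 da8 da9

# 2. Snapshot everything and replicate it to the new pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F tank2

# 3. Once the copy is verified, retire the old pool
zpool destroy tank

# 4. Extend the new pool with the 4x4TB + 2x2TB as a second 6-drive Z2 vdev
zpool add tank2 raidz2 ada0 ada1 ada2 ada3 da10 da11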
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You could also add two 4x2TB RAIDZ1 vdevs to your existing pool. That way you are not mixing vdev types and you have a single pool. OR, with the new drives, make a new 6-drive Z2 pool, move your data over (not sure if space will allow... but possibly external storage for a bit?), and then abandon the old Z1 pool. Once this is done, combine the 4 4TB drives and the remaining 2 2TB drives to extend the new pool with another 6-drive Z2 vdev.

Soooo many options. :)
In my humble opinion, this is the better option. Even if the second vdev has mixed sizes, you have an upgrade path: replace the 2 x 2TB drives in that second vdev with 2 x 4TB and it will grow the pool.

Please note that mixing drive sizes in a single vdev IS fully supported by ZFS. But it uses the smallest drive to determine the vdev's capacity, so 4 x 4TB + 2 x 2TB will appear as 6 x 2TB drives.
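
At the ZFS level, that upgrade path would look something like this - again with placeholder pool and device names, letting each resilver finish before the next swap:

# Allow vdevs to grow once every member disk is larger
zpool set autoexpand=on tank2

# Swap the two 2TB drives in the mixed vdev for 4TB ones, one at a time
zpool replace tank2 da10 ada4   # wait for the resilver to finish
zpool replace tank2 da11 ada5   # the pool grows when this one completes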
 