Adding vdev to existing pool, does data from first vdev spread across second vdev?

Status
Not open for further replies.

frizzle02

Cadet
Joined
Oct 23, 2016
Messages
3
I started working with FreeNAS about a month ago and got spun up pretty quickly on its interface and quirks, but I want input before I proceed with my NAS as I'm dealing with tons of data. This NAS is designed to hold media for my Plex server, so it's not critical since I have all my data backed up; I just don't feel like rebuilding my pool anytime soon.

I have a DL180se G6 with a 6 x 5TB raidz1 as my first vdev. I have another 6 x 5TB disks to fill the box (I'm copying data from the second set of disks onto the first vdev, then setting that second set up as the second vdev). I know I can expand a pool by adding a second vdev (yes, I'll make it raidz1 like the first), but I've read on a few forums that the data from the first vdev will be spread across the second vdev, creating one huge stripe across all 12 disks. Is this true, or does data already on the first vdev stay there, with only new writes going to both vdevs from here on out? If it does spread out, I have 22TB of usable space on the first vdev (which is filled), and rebalancing that onto the 2nd vdev would take days and days.

If it does spread across the second vdev automatically, I'd rather just create a second pool with that vdev, as it really doesn't bother me to create a second Windows share and split up the drives. Thoughts?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
ZFS doesn't move data at rest. Ever. New data will be balanced out across the vdevs, and as modifications/deletions are made the modifications will be balanced out too.

Theoretically, if you wanted to, you could mv the data from one dataset to another, and that would force it to be moved and rebalanced. But generally... don't worry about it.
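For reference, here's roughly what that looks like at the command line. This is a sketch, not something to paste blindly: the pool name (tank), device names (da6-da11), dataset name (media), and mount path are placeholders for your own setup, and the mv between datasets is really a copy-plus-delete since each dataset is a separate filesystem, which is exactly what rewrites the blocks.

```shell
# Placeholders: pool "tank", devices da6..da11, dataset "media".
# Adjust for your own system before running anything.

# Add the second 6-disk raidz1 vdev to the existing pool:
zpool add tank raidz1 da6 da7 da8 da9 da10 da11

# Show per-vdev usage; note the existing data stays on the first vdev:
zpool list -v tank

# Optional rebalance by rewriting: copying into a different dataset
# allocates fresh blocks, which ZFS spreads across both vdevs.
zfs create tank/media2
cp -a /mnt/tank/media/. /mnt/tank/media2/

# Verify the copy, then drop the old dataset and rename the new one:
zfs destroy -r tank/media
zfs rename tank/media2 tank/media
```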
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Above 90% full affects performance. 80% is when the GUI alerts you that you are getting close.

 

frizzle02

Cadet
Joined
Oct 23, 2016
Messages
3
Data that's already on the first vdev will stay there. Roughly speaking, ZFS will attempt to balance data across the two vdevs by writing new data mostly to the second vdev, until it's about as full as the first.
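To make that concrete, here's a toy model (my own simplification, not actual ZFS code) of free-space-biased allocation: each new write goes to whichever vdev currently has the most free space, so a freshly added empty vdev absorbs nearly everything until it is about as full as the old one.

```python
def choose_vdev(free):
    """Greedy simplification: allocate to the vdev with the most free space."""
    return max(range(len(free)), key=lambda i: free[i])

# Two vdevs with ~22 TiB usable each (figures in GiB): the first is
# nearly full (2,000 GiB free), the second freshly added and empty.
free = [2_000, 22_000]
written = [0, 0]

for _ in range(220):            # write 220 x 100 GiB = 22 TiB of new data
    v = choose_vdev(free)
    free[v] -= 100
    written[v] += 100

print(written)  # the second vdev takes the overwhelming share
print(free)     # and both vdevs end up equally full
```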

Use of RAIDZ1 with disks that size is strongly discouraged on the forums -- if a disk fails, your entire pool is vulnerable to a second disk failure destroying it until you've replaced the disk and resilvered the replacement, which takes a nerve-wrackingly long time with 5TB disks in a RAIDZ vdev.

Also you say your existing disks are "filled". Going above about 80% full risks permanently affecting your pool's performance; if you are already there you may want to research this and consider rebuilding your pool.

I've seen that raidz1 is unpopular on the forums and in all the articles I've read. I thought about doing raidz2, but I'm trying to get the most storage out of my setup while still having 'some' redundancy. I have backups of all my data, so rebuilding my raidz1 into a raidz2 or striped mirrors isn't out of the picture; it'll just take another 5 days to copy all ~20TB of data back if the day comes when 2 drives fail.

Also, regarding the 80/90% threshold affecting performance: is that percentage of the pool space or of the vdev space? And since I'm literally just copying files onto the pool and not modifying them (media storage only, no VMs or databases), do I need to be worried? Isn't the threshold really just about fragmentation? I won't see much fragmentation since my workload is WORM (write once, read many).
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Well, when you have only one vdev, filling it is the same as filling the pool.

 

frizzle02

Cadet
Joined
Oct 23, 2016
Messages
3
Well, when you have only one vdev, filling it is the same as filling the pool.


True, but as I mentioned before, I wanted to add a 2nd vdev to my only pool. I wasn't sure if the threshold would then apply to the whole pool (which would then be ~48% filled) or to the vdevs themselves.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
When a pool gets to 90% full, ZFS switches to a different algorithm to find the next free block to write into: a more brute-force search designed to avoid chronic fragmentation. This algorithm takes a relatively long time to find a free block, so it slows writing, not reading.

BUT, at 100% full bad things happen. Like not being able to delete files, and not being able to mount your pool until you do delete files. Which creates a problem.

Thus, you should be planning your capacity expansion when you get to 80% and you should be implementing it when you get to 90%.

Starting at 90% is a bad idea ;)

But I think you aren't planning to start at 90%
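The 80/90% guidance above, restated as a small hypothetical helper (my own wording for the return strings, not anything FreeNAS itself reports):

```python
def capacity_advice(used, total):
    """Map pool fill percentage onto the 80/90% rules of thumb."""
    pct = 100.0 * used / total
    if pct >= 90:
        return pct, "expand now"
    if pct >= 80:
        return pct, "plan your expansion"
    return pct, "ok"

# e.g. ~20 TiB used of 22 TiB usable is already past the 90% line:
pct, advice = capacity_advice(20, 22)
print(f"{pct:.1f}% full: {advice}")
```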
 