Add vdev to existing server vs. adding 2nd server

Status
Not open for further replies.

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
IIRC it has to do with the prevalence of compression messing up the block boundaries, which was why the drives-per-vdev suggestion existed.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Thanks Guys! That answered the question for me!!
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I vote for a 3rd vDev of RaidZ2. Of course, it looks like you may be out of drive slots in that case, so is a JBOD under consideration? That, or get a bigger case which houses more drives, has bigger PSUs and can take your parts.

As far as JBODs go, I prefer to have them use their own Pool/Volume, just for the safe feeling that no part of any vDev is housed somewhere else. But that is just me...


Mirfster - Thanks for the info; quick question for you on your preference for a new pool on a separate JBOD. My thought was that FreeNAS hates it when you go over 80% pool utilization. I am near that mark now, hence the expansion.

I know that if you lose a vdev you lose the pool, and right now both of my vdevs are in a single chassis on a single LSI card. I have a separate chassis with 12 more bays, and I was going to add six more drives as RAIDZ2 and then add that vdev to my existing pool (more for ease of use with all of my software than anything else).

My main concern is that if my JBOD chassis suffers a total power failure (or the controller it is connected to fails), I assume that since I have lost that entire vdev, I would also have lost my entire pool. Is this correct, or does FreeNAS have some safety feature that would help in this scenario?

If I make a separate pool from each JBOD, that complicates my life with the software I am running, since it all likes to point to a single mount point (/mnt/media) for processing and I am not sure there is a way around that. But the thought of adding more moving parts to a single pool, where the loss of any of those parts (a power failure on a JBOD shelf) could cause a total loss of the pool, also gives me pause.

What is the best practice for running multiple JBODs on a single server?

Thanks!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Technically you will not *lose* the Pool if the JBOD goes down. You would of course get "Critical" messages from FreeNAS that the Pool is "Unavailable", but once you get the JBOD powered back on and the drives are accessible, the Pool will come back online. It may take a reboot, but I am not sure, since I do not have mine configured that way.

As far as adding a JBOD, see Post #8 in "External SAS Raid Controller + SAS Expander" for what I did.
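
For illustration, getting things healthy again after the JBOD comes back might look something like this from the shell (the pool name vol1 and the gptid label are placeholders, and FreeNAS would normally handle this through the GUI):

Code:
# After the JBOD is powered back on:
zpool status vol1     # confirm the vdevs show ONLINE again
zpool clear vol1      # clear the error counters / "Critical" state left from the outage
# If a single disk is still marked OFFLINE or UNAVAIL, it can be brought back by hand:
# zpool online vol1 gptid/xxxxxxxx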
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Ah, got it. I understand now. Thank you! I didn't stop to think that FreeNAS would take the entire pool offline the minute it lost a vdev, but that makes sense now.

Thank you again for the help!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
You will have 18 drives, correct? For three 6-drive RAIDZ2 vdevs?

You should split your vdevs so that 1/3 of each one is in the JBOD. Since a RAIDZ2 vdev can lose a third of its drives (2 of 6), you will still be okay if you lose the JBOD.
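
Purely as a sketch, with made-up device names (da0-da3 in the head unit, da12-da13 in the JBOD), adding the new vdev would look something like this. The two existing vdevs would each need two members migrated into the JBOD as well (e.g. via zpool replace), and on FreeNAS you would normally do the extension through the volume manager rather than the command line:

Code:
# Third 6-drive raidz2 vdev, with only two of its members behind the JBOD:
zpool add vol1 raidz2 da0 da1 da2 da3 da12 da13
zpool status vol1   # each raidz2 vdev can then survive losing its two JBOD disks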
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
You will have 18 drives, correct? For three 6-drive RAIDZ2 vdevs?

You should split your vdevs so that 1/3 of each one is in the JBOD. Since a RAIDZ2 vdev can lose a third of its drives (2 of 6), you will still be okay if you lose the JBOD.

I currently have 12 drives split as 2 x 6 Drive RAIDZ2 vdevs. All of those drives are in my first chassis. My second chassis has 12 more drive slots. I am going to add 6 more drives right now and eventually another 6 drives down the road.

My only concern about changing my current setup the way you recommend is: what happens if I lose power to the JBOD, for example?

Right now, if I leave it as a third 6-drive RAIDZ2 vdev, add it to my current pool, and it blinks offline, I lose my pool until I can get the drives back online... no biggie... it's for my Plex server anyway. When I power it back up (and maybe reboot) the pool should see all the drives and come back online.

If I split my vdevs that way and then lose the JBOD, the pool would stay online but in a degraded state, and once I bring the JBOD back online I would have 6 drives that need to resilver, correct? I'm thinking that might take a very long time, and I would have zero redundancy in any of my vdevs while that was happening.

Or do I have it wrong?

Thanks!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Yes, but basically ZFS knows what it wrote while the drives were offline and very quickly rewrites it.

But yeah, if you lose a third of your drives (two out of each 6-drive RAIDZ2 vdev), it's possible to have the pool stay online, but without further redundancy.
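
A minimal way to watch that catch-up, assuming the pool is named vol1:

Code:
zpool status vol1   # the "scan:" line shows resilver progress and an estimate
# once it reports "resilvered ... with 0 errors", full redundancy is back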
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I currently have 12 drives split as 2 x 6 Drive RAIDZ2 vdevs. All of those drives are in my first chassis. My second chassis has 12 more drive slots. I am going to add 6 more drives right now and eventually another 6 drives down the road.

My only concern about changing my current setup the way you recommend is: what happens if I lose power to the JBOD, for example?

Right now, if I leave it as a third 6-drive RAIDZ2 vdev, add it to my current pool, and it blinks offline, I lose my pool until I can get the drives back online... no biggie... it's for my Plex server anyway. When I power it back up (and maybe reboot) the pool should see all the drives and come back online.

If I split my vdevs that way and then lose the JBOD, the pool would stay online but in a degraded state, and once I bring the JBOD back online I would have 6 drives that need to resilver, correct? I'm thinking that might take a very long time, and I would have zero redundancy in any of my vdevs while that was happening.

Or do I have it wrong?

Thanks!
Once the drives come back, there will be a very quick resilver to catch up on the writes that were missed during the outage. It might even say you lost X bytes because it had no place to write them to disk.

Sent from my Nexus 5X using Tapatalk
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Yes, but basically ZFS knows what it wrote while the drives were offline and very quickly rewrites it.

But yeah, if you lose a third of your drives (two out of each 6-drive RAIDZ2 vdev), it's possible to have the pool stay online, but without further redundancy.

I guess that would be my concern. As a pilot I am sometimes away for a week or more at a time, and if the entire pool went offline as a result of a failed JBOD, that would be better for me than having the pool go into a degraded state, keep reading and writing data, and then lose just one more drive and lose everything!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Then the easy thing to do is just stack the vdevs.

If you have backups, one of the best things you can do for learning is to test failures, i.e. turn off the JBOD and see what happens.
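
If pulling power on the whole shelf feels drastic, a smaller rehearsal is possible from the shell (the pool and disk names below are made up):

Code:
zpool offline vol1 da12   # take one JBOD member out of service
zpool status vol1         # its vdev reports DEGRADED; the pool stays ONLINE
zpool online vol1 da12    # bring it back; a short resilver catches up the missed writes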
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Thanks, everyone, for the great info. Based on my configuration, I decided to extend my pool to include the vdev in the JBOD, giving me 65.2TB raw and 42TB usable.

Code:
[root@plexnas] ~# df -h
Filesystem      Size    Used   Avail  Capacity  Mounted on
vol1/media       42T     23T     20T       53%  /mnt/vol1/media
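
As a quick sanity check that the extension took (using the pool name from the output above):

Code:
zpool status vol1   # should now list three raidz2 vdevs (raidz2-0, raidz2-1, raidz2-2)
zpool list vol1     # SIZE reflects the new raw capacity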
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks, everyone, for the great info. Based on my configuration, I decided to extend my pool to include the vdev in the JBOD, giving me 65.2TB raw and 42TB usable.

Code:
[root@plexnas] ~# df -h
Filesystem      Size    Used   Avail  Capacity  Mounted on
vol1/media       42T     23T     20T       53%  /mnt/vol1/media
zpool list should be used instead of df most of the time.

Sent from my Nexus 5X using Tapatalk
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
zpool list should be used instead of df most of the time.

I have found that df -h gives me the actual space available (20T), while zpool list shows total bytes available without taking redundancy into account (31.4TB), so that number does not mean much to me. I guess I could have used zfs list instead of df -h and gotten the same number.
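
For completeness, the three views side by side (output omitted):

Code:
zpool list vol1               # raw pool capacity; parity space counts as available
zfs list vol1/media           # usable space after redundancy, roughly the figure df -h reports
zfs list -o space vol1/media  # breaks the numbers down by snapshots, children and reservations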
 