RAID10 extend + SLOG advice


plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
Hi guys,
Would appreciate some help. We currently have a FreeNAS server serving a RAID10 pool via NFS to 2 ESXi servers.
The pool consists of 4 WD Reds (2 mirrored vdevs). I've bought 4 more disks and would like to extend the pool.

I can see that I'm able to do this via the volume manager: I can select my 4 additional disks to extend the current setup, but I have to choose either stripe or mirror.
For virtualization storage, what would be the recommendation? If I go with stripe, that means I'd have 4 mirrored vdevs, right? But with the risk that if any one of the vdevs dies, it destroys the pool?

The second question is about SLOG. We currently have sync writes disabled, with all the risks that involves. I've got 2x DC3700 SSDs that I'd like to use as our SLOG device, so I can enable sync writes and not worry about data loss if something goes wrong. From reading the forums, I understand that extending the pool would create another pool and just connect it to the volume. Does that mean I'd need a SLOG for each pool? If not, would it be a good idea to stripe/mirror them?

Thanks.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You should be able to extend your pool with another pair of mirrors, for a total of 4 mirror vdevs.

Then you should be able to add a mirror of the P3700s as a SLOG.

That should work fine.

I’m not 100% certain of the exact GUI methods to do this.
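
From the CLI, though, the rough equivalent would be something like the sketch below. Note this is only a sketch: "tank", nvd0/nvd1, and tank/vm are placeholder names, and FreeNAS normally references pool members by gptid label rather than raw device name.

Code:
	# Add the two SSDs as a mirrored log (SLOG) vdev.
	# "tank" and nvd0/nvd1 are placeholders for the pool and SSD device names.
	zpool add tank log mirror nvd0 nvd1

	# Then re-enable sync writes on the dataset exported to ESXi.
	# sync=standard honors the client's sync requests (ESXi issues sync writes over NFS).
	zfs set sync=standard tank/vm

With the log vdev mirrored, one failed SSD only degrades the mirror; the pool falls back to the in-pool ZIL only if both SSDs are lost.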
 

plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
You should be able to extend your pool with another pair of mirrors, for a total of 4 mirror vdevs.

Then you should be able to add a mirror of the P3700s as a SLOG.

That should work fine.

I’m not 100% certain of the exact GUI methods to do this.

Excellent, so that means I can mirror my SLOG and won't have to do a separate one for each pool.

That leaves only my first question: when extending the volume, I can pick either mirror or stripe.
If I pick stripe, I'll have 4 vdevs like you said. I'll get the additional space and performance, but also 2 more failure points (vdev dead = pool dead).
If I pick mirror, I'll lose the capacity from the extension, but gain what? Read speed? More resiliency for the pool?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If I pick mirror, I'll lose the capacity from the extension, but gain what? Read speed? More resiliency for the pool?
If you have this kind of question, you should review the documentation again:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

Also, there is just a massive amount of great info linked from the manual.
Did you read the manual?
http://doc.freenas.org/11/freenas.html
 

plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
If you have this kind of question, you should review the documentation again:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

Also, there is just a massive amount of great info linked from the manual.
Did you read the manual?
http://doc.freenas.org/11/freenas.html

I actually went through all of this a while back, and I think my wording of the question was the problem.
I see that the usual setup and recommendation is to stripe the mirrored vdevs to create the larger pool. I just wanted to know whether there are any actual benefits, for my deployment, in using the additional disks as a mirror instead (benefits that would outweigh the huge cost of losing 50% of my storage capacity).

Edit:
Scratch the above; I guess I really did have to double-check my stuff. I thought the volume manager was showing the total drive space I would have, when it actually shows the additional space I would gain :).
Obviously I cannot use striped disks in a mirrored deployment.
Thanks for all the help and advice, it was greatly appreciated.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Excellent, so that means I can mirror my SLOG and won't have to do a separate one for each pool.

Why do you have more than one pool?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I still have the impression that you have a basic misunderstanding that is likely creating a problem. I hope you will answer some questions about your configuration so that we can help you.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi guys,
Would appreciate some help. We currently have a FreeNAS server serving a RAID10 pool via NFS to 2 ESXi servers.
The pool consists of 4 WD Reds (2 mirrored vdevs). I've bought 4 more disks and would like to extend the pool.

I can see that I'm able to do this via the volume manager: I can select my 4 additional disks to extend the current setup, but I have to choose either stripe or mirror.
For virtualization storage, what would be the recommendation? If I go with stripe, that means I'd have 4 mirrored vdevs, right? But with the risk that if any one of the vdevs dies, it destroys the pool?

The second question is about SLOG. We currently have sync writes disabled, with all the risks that involves. I've got 2x DC3700 SSDs that I'd like to use as our SLOG device, so I can enable sync writes and not worry about data loss if something goes wrong. From reading the forums, I understand that extending the pool would create another pool and just connect it to the volume. Does that mean I'd need a SLOG for each pool? If not, would it be a good idea to stripe/mirror them?

Thanks.

Please stop now before you make any changes.

I think there is a terminology barrier here that's going to result in you making a decision that is bad for your data's health.

Currently, you have four disks in a pool, set up as two mirror vdevs, like below:

Code:
	NAME          STATE     READ WRITE CKSUM
	plissje       ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    ada0      ONLINE       0     0     0
	    ada1      ONLINE       0     0     0
	  mirror-1    ONLINE       0     0     0
	    ada2      ONLINE       0     0     0
	    ada3      ONLINE       0     0     0


This pool can survive the failure of an individual disk in each vdev, but not both disks of the same vdev. This is the normal "performance setup."

What you want to do is extend this pool by adding more mirror vdevs, which results in the pool below:

Code:
	NAME          STATE     READ WRITE CKSUM
	plissje       ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    ada0      ONLINE       0     0     0
	    ada1      ONLINE       0     0     0
	  mirror-1    ONLINE       0     0     0
	    ada2      ONLINE       0     0     0
	    ada3      ONLINE       0     0     0
	  mirror-2    ONLINE       0     0     0
	    ada4      ONLINE       0     0     0
	    ada5      ONLINE       0     0     0
	  mirror-3    ONLINE       0     0     0
	    ada6      ONLINE       0     0     0
	    ada7      ONLINE       0     0     0


This pool will retain the same "one disk can fail from each vdev" tolerance.
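
For reference, the CLI form of this correct extension is sketched below; the pool and disk names follow the example listing above (a real FreeNAS system would use gptid labels instead). The mirror keyword before each pair is what groups the new disks into mirror vdevs.

Code:
	# Extend the pool with two more two-way mirror vdevs.
	# Pool name and ada4..ada7 follow the example listing above.
	zpool add plissje mirror ada4 ada5 mirror ada6 ada7

Omitting the mirror keyword is exactly what produces the single-disk layout shown next.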

You absolutely do not want to select the "stripe" option for the additional disks, as this will result in adding the four disks as individual, single-drive vdevs:

Code:
	NAME                 STATE     READ WRITE CKSUM
	plissje_dun_goofed   ONLINE       0     0     0
	  mirror-0           ONLINE       0     0     0
	    ada0             ONLINE       0     0     0
	    ada1             ONLINE       0     0     0
	  mirror-1           ONLINE       0     0     0
	    ada2             ONLINE       0     0     0
	    ada3             ONLINE       0     0     0
	  ada4               ONLINE       0     0     0
	  ada5               ONLINE       0     0     0
	  ada6               ONLINE       0     0     0
	  ada7               ONLINE       0     0     0


This setup means that if any single one of ada4 through ada7 fails, your entire pool is toast.

You don't want that.
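
If you do end up at the command line, zpool add has a dry-run flag that prints the configuration that would result without changing the pool, which is a cheap way to catch this mistake before making it:

Code:
	# -n prints the layout that WOULD result, without modifying the pool.
	# Confirm the new disks show up under mirror-N entries, not at the top level.
	zpool add -n plissje mirror ada4 ada5 mirror ada6 ada7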
 