Build advice on multi-pool NAS.

Status
Not open for further replies.

19norant

Dabbler
Joined
Dec 15, 2016
Messages
26
I have some options here and am mulling over how to approach my first build. I would like to do this right from the start and get all drives in place from the get-go.

Drives available:
  • 6TB x1
  • 5TB x1
  • 4TB x5
  • 3TB x5
  • 2TB x8
  • 1TB x12
  • 750GB x2
  • 256 GB SSD x1
  • 60 GB SSD x2
  • 16 GB USB stick (to boot FreeNAS)
System specs:
  • Rosewill RSV-L4500 (15 bays)
  • GA-MA790GP-UD4H (6 SATAII ports)
  • AMD Phenom II X6 1045T
  • 16GB DDR2
  • IO-PCE9705-8I x2 (16 SATAIII ports)
I am doing my best to use parts I have hanging around. I suspect one of the first pieces of advice will be to ditch my motherboard so I can throw more RAM at the system, and that's fair. It may not even be out of the question, just trying to save a few hundred bucks.

The main pool will be for a Plex Media Server. Being that it isn't horrible if I lose a movie or TV episode, I am planning to go with RAIDz1 for this pool. I have toyed with the idea of doing this with RAIDz2, but I value usable space over data preservation. So z1 is the plan here.
  • plex_pool (36 TB)
    • vdev0: 4 TB x5
    • vdev1: 3 TB x5
    • vdev2: 2 TB x5
The extra pool would be for use by a little Apache Spark cluster I am looking to set up. Allowing all Spark nodes to have a common mount will make things easier. As performance and data preservation are a bit more desirable on this pool, I will be mirroring my vdevs (rough capacity math for both pools is sketched after the list). For this I am planning to have:
  • spark_pool (7 TB)
    • vdev0: 6 TB + 5 TB
    • vdev1: 2 TB + 2 TB
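As a rough sanity check on the 36 TB and 7 TB figures (a sketch only; real usable space will be noticeably lower after the TB-vs-TiB conversion, ZFS metadata and padding, and keeping the pool under roughly 80% full):

Code:
# Rough usable capacity for the proposed pools, in marketing TB.
# A RAIDZ vdev gives roughly (disks - parity) * smallest disk;
# a mirror vdev gives roughly its smallest disk.

def raidz_usable(disks_tb, parity):
    return (len(disks_tb) - parity) * min(disks_tb)

def mirror_usable(disks_tb):
    return min(disks_tb)

plex_pool = (
    raidz_usable([4] * 5, parity=1)    # vdev0: 5x4 TB RAIDZ1 -> 16 TB
    + raidz_usable([3] * 5, parity=1)  # vdev1: 5x3 TB RAIDZ1 -> 12 TB
    + raidz_usable([2] * 5, parity=1)  # vdev2: 5x2 TB RAIDZ1 ->  8 TB
)

spark_pool = (
    mirror_usable([6, 5])    # vdev0: 6 TB + 5 TB mirror -> 5 TB (1 TB of the 6 TB disk unused)
    + mirror_usable([2, 2])  # vdev1: 2 TB + 2 TB mirror -> 2 TB
)

print(plex_pool, "TB")   # 36
print(spark_pool, "TB")  # 7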
I would then throw the 256 GB SSD on as an L2ARC and one of the 60 GB SSDs on as a dedicated ZIL (SLOG) device. I admit that I don't know if the L2ARC is necessary for my setup. Maybe I should wait and see what my cache hit rate is before making that move? I'm also not sure, from what I've read, whether the L2ARC and SLOG can be shared by both pools or not. Some advice/guidance there would be appreciated as well.
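For reference, here is one way I could check the hit rate later before buying anything (a sketch; it assumes the stock FreeBSD/FreeNAS ARC counters under kstat.zfs.misc.arcstats are readable via sysctl):

Code:
# Rough ARC hit-ratio check (sketch; assumes FreeBSD exposes the standard
# kstat.zfs.misc.arcstats counters through sysctl).
import subprocess

def arcstat(name):
    out = subprocess.check_output(["sysctl", "-n", "kstat.zfs.misc.arcstats." + name])
    return int(out.decode().strip())

hits = arcstat("hits")
misses = arcstat("misses")
total = hits + misses
ratio = 100.0 * hits / total if total else 0.0
print("ARC hits: %d  misses: %d  hit ratio: %.1f%%" % (hits, misses, ratio))

A consistently high hit ratio (and RAM already maxed out) would suggest an L2ARC isn't going to buy much.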

For the observant of you out there, you may be thinking... "He has 15 bays and 21 drives." Yes... this is a problem I will have to deal with. I think there is enough room in the case (and enough holes) so I can finagle mounting some more drives in there. We'll see. If this obstacle proves too difficult, I may have to scrap the spark_pool and go hardware RAID in my SansDigital tower for that. We'll see if I can overcome the physical limitations.

So that's that. Looking forward to hearing whatever feedback the community has on my first build.

Cheers!
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Being that it isn't horrible if I lose a movie or TV episode, I am planning to go with RAIDz1 for this pool. I have toyed with the idea of doing this with RAIDz2, but I value usable space over data preservation. So z1 is the plan here.
This is a knee-jerk plan; you need to read up and become familiar with ZFS RAID levels, as well as the hardware guide @m0nkey_ linked to.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
Hi @19norant

Take a look at the guide on the resources tab linked by @m0nkey_ and pick a better motherboard. I'm sure you have done some reading, but it's always good to check the recommendations.

I understand you want to use what you have lying around, but proper planning is always the way to go, especially to safeguard your data.

Food for thought: put your bigger disks (the 6 TB, the 5 TB, and the five 4 TB drives) in a raidz2 vdev. That would give you about 20 TB of usable space and use 7 of your 15 bays. Build another raidz2 vdev in the remaining bays with the 3 TB and 2 TB disks - 8 of them. That vdev would be capped at 2 TB per disk, so about 12 TB, but in the future you could replace those with bigger drives and increase the volume size once they have all been replaced.
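Roughly where those numbers come from (a sketch; I'm assuming the remaining 8 bays get the five 3 TB disks plus three of the 2 TB ones, and the figures are marketing TB before ZFS overhead):

Code:
# A RAIDZ vdev is capped by its smallest member disk.
def raidz2_usable(disks_tb):
    return (len(disks_tb) - 2) * min(disks_tb)

big   = [6, 5, 4, 4, 4, 4, 4]      # 6 TB + 5 TB + five 4 TB -> 7 bays
small = [3, 3, 3, 3, 3, 2, 2, 2]   # assumed: five 3 TB + three 2 TB -> 8 bays

print(raidz2_usable(big), "TB")    # (7 - 2) * 4 = 20
print(raidz2_usable(small), "TB")  # (8 - 2) * 2 = 12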

@cyberjock has a good slide deck that explains all of that here: https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Use one of the small SSDs as your boot device and plan for about 32 GB of RAM. If money is a problem, as it always is with me, start with 16 GB, but plan based on your motherboard's memory slots to get there (i.e. if you have 4 slots, start with 2x8 GB).

While you are reading the recommended hardware list, think about a couple of good HBA controllers to connect that many disks. The typical ones offer 8 ports, and the layout above already has 16 disks. Considering the motherboard will have some ports of its own, you might be fine with two of those.

Also make sure to size your PSU properly. Check your motherboard/CPU/fan/HDD specs and don't try to save money by getting a cheap one.

I'm pretty sure you know all of the above, but considering I have both of my FreeNAS boxes in the same cabinet, I couldn't resist exercising my fingers... One last thing: that case is like an '80s Cadillac engine compartment, lots of space, but it has four fans in the middle of it, and those will take up some of the room where you are thinking of adding disks. Don't block ventilation trying to stuff more 1 TB disks in it.

Good luck!
 

19norant

Dabbler
Joined
Dec 15, 2016
Messages
26
Thanks, guys. You've all basically told me what I knew/suspected (but didn't want to admit). I'll circle back and look at some options for a motherboard replacement. Cheers!
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I admit that I don't know if the L2ARC is necessary for my setup. Maybe I should wait and see what my cache hit rate is before making that move?
Yes, real world test, then add RAM if performance is inadequate, then consider L2ARC after maxing out RAM.
 

19norant

Dabbler
Joined
Dec 15, 2016
Messages
26
This is a knee-jerk plan; you need to read up and become familiar with ZFS RAID levels, as well as the hardware guide @m0nkey_ linked to.
I understand that z3 is better than z2, which is better than z1. If I were simply saying, "I am going to use z1 because I want more usable space no matter what," then I'd agree with the "knee-jerk" label. But I'm saying I plan to use z1 because partial data loss isn't really a critical issue. When data is more valued (in my Spark situation), I've outlined a different approach -- which, I think, shows that I am looking at things case by case.

So while I understand that z1 isn't great for an enterprise approach, why do you think it is a knee-jerk plan for media that is easily replaceable? The FreeNAS wizard even recommends z1 for media storage. If I understand the risks and acknowledge the recovery process, I don't think it's exactly knee-jerk.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I understand that z3 is better than z2, which is better than z1. If I were simply saying, "I am going to use z1 because I want more usable space no matter what," then I'd agree with the "knee-jerk" label. But I'm saying I plan to use z1 because partial data loss isn't really a critical issue. When data is more valued (in my Spark situation), I've outlined a different approach -- which, I think, shows that I am looking at things case by case.

So while I understand that z1 isn't great for an enterprise approach, why do you think it is a knee-jerk plan for media that is easily replaceable? The FreeNAS wizard even recommends z1 for media storage. If I understand the risks and acknowledge the recovery process, I don't think it's exactly knee-jerk.
Evidently you took offense at the term knee-jerk, and it was not my intention to suggest an insult by using that term:oops:
The main pool will be for a Plex Media Server. Being that it isn't horrible if I lose a movie or TV episode, I am planning to go with RAIDz1 for this pool. I have toyed with the idea of doing this with RAIDz2, but I value usable space over data preservation. So z1 is the plan here.
  • plex_pool (36 TB)
    • vdev0: 4 TB x5
    • vdev1: 3 TB x5
    • vdev2: 2 TB x5
When making this comment, I was referring to the size of the disks you intend to use AND the overall size of the 36 TB volume. If you have read the warnings and don't mind the risk, then using RAIDz1 with that hardware...
What we try to avoid here on the forum is giving green-light approval to practices that we historically know to be less than optimal when it comes to risking data loss of any kind, media or otherwise.

Again, my use of the term knee-jerk was not appropriate. Please consider it retracted, but please know that I also think you might give RAIDz1 another bit of risk vs. reward study.
Where the hell is @DrKK when you need him.
 

dcevansiii

Dabbler
Joined
Sep 9, 2013
Messages
22
But I'm saying I plan to use z1 because partial data loss isn't really a critical issue.

Just making sure that you understand it is NOT partial data loss. It is TOTAL data loss.

If your data is stored on X disks running raidZ1, and you lose two disks, you have total data loss. *You lose the entire pool.*

You might want to brush up a little on ZFS basics. It is not like other file systems.

I just want to make sure you understand that. And also your time to recreate the lost data is actually pretty valuable. A fact which becomes more apparent the older you get.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
If your data is stored on X disks running raidZ1, and you lose two disks, you have total data loss. *You lose the entire pool.*
The OP suggested his Plex volume would consist of three vdevs. Just to clarify:
plex_pool (36 TB)
  • vdev0: 4 TB x5
  • vdev1: 3 TB x5
  • vdev2: 2 TB x5
In that scenario, the second drive lost would have to be in the same vdev as the first drive that failed; only then would the loss of that vdev take out all the data on all three vdevs that make up the volume.
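To put a rough number on that (a sketch; it assumes a second failure is equally likely on any of the remaining drives, which is optimistic since a resilver stresses the surviving members of that same vdev):

Code:
# Chance that a second failure lands in the same 5-disk RAIDZ1 vdev as the first.
vdevs = [5, 5, 5]   # drives per vdev in the proposed plex_pool
total = sum(vdevs)

# P(first failure in a vdev) * P(second failure hits that vdev's remaining drives)
p_pool_loss = sum((n / total) * ((n - 1) / (total - 1)) for n in vdevs)
print("%.1f%%" % (100 * p_pool_loss))  # ~28.6% of double failures take out the pool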
But with your mention of wasted time being valuable, I'm right there with you, friend :)

Darn it! I'm tryin' to sound like this data loss is a really bad thing, but sadly I don't think I pulled it off :rolleyes:
 

19norant

Dabbler
Joined
Dec 15, 2016
Messages
26
Evidently you took offense at the term knee-jerk, and it was not my intention to suggest an insult by using that term:oops:
Darn it! I'm tryin' to sound like this data loss is a really bad thing, but sadly I don't think I pulled it off :rolleyes:

@BigDave No worries. :) I think I was drunk when I responded, so I probably reacted in a bit more of a knee-jerk (ha!) fashion than I would normally.

Here's, I think, the crux of it all...

I was justifying the use of z1 for my plex pool because if there is a sector issue during a rebuild, then I'll possibly lose a few files and have to replace them. That is what I am okay with.

However, if there is a drive issue (within the same vdev) while rebuilding, now my whole pool is shot. This wouldn't be the end of my world... but I would be sad for a while. :eek: Did that make it sound bad enough? Heh.

As I think through this more, I'm toying with the idea of selling some of the 1 TB and 2 TB drives I have as extras, maybe with a 4-bay and/or 8-bay SansDigital tower, to help someone else get started in the mass storage hobby. That should cover most of the cost of grabbing some 3 TB or 4 TB drives.

Then I could go:
  • vdev0: 3 TB x10 (z2)
  • vdev1: 4 TB x5 (z2)
or
  • vdev0: 3 TB x5 (z2)
  • vdev1: 4 TB x10 (z2)
That would give me either 36 TB or 41 TB.
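Quick math on those two options (same rough TB figures as before, ignoring ZFS overhead and the TiB conversion):

Code:
# Usable TB for each RAIDZ2 layout option.
def raidz2_usable(n_disks, size_tb):
    return (n_disks - 2) * size_tb

option_a = raidz2_usable(10, 3) + raidz2_usable(5, 4)   # 24 + 12 = 36 TB
option_b = raidz2_usable(5, 3) + raidz2_usable(10, 4)   #  9 + 32 = 41 TB
print(option_a, option_b)  # 36 41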

Thanks for bearing with me as I use the forum as a sounding board that forces me to think through my decisions/approach. And, @dcevansiii, I'm definitely old enough... haha.

Cheers!
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
As I think through this more, I'm toying with the idea of selling some of the 1 TB and 2 TB drives I have as extras, maybe with a 4-bay and/or 8-bay SansDigital tower, to help someone else get started in the mass storage hobby. That should cover most of the cost of grabbing some 3 TB or 4 TB drives.

Then I could go:
  • vdev0: 3 TB x10 (z2)
  • vdev1: 4 TB x5 (z2)
or
  • vdev0: 3 TB x5 (z2)
  • vdev1: 4 TB x10 (z2)
That would give me either 36 TB or 41 TB.
Oh, I like that idea much better, and you will too;)

It's good to know you are a Lush instead of an oversensitive crybaby :D:p:);)
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Where the hell is @DrKK when you need him.
I'm here.

For the record, I'm OK with RAID-Z on small vdevs, with small capacity drives, *IF* they are fascistly maintained. I wouldn't even think twice about something like a 3x2TB or 3x3TB in a RAID-Z, and I'd even be guardedly accepting of a 4x{2,3}TB in a RAID-Z. But I'd have a cold spare *ON HAND*, and I'd have ideal temps, ideal everything, and I would never do it with big drives with large numbers of platters.

I'm probably going to build a new FreeNAS pretty soon. I'm probably going to do something like a 6x4TB in RAID-Z2.
 