Please help me decide what kind of pool to use on home server


Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
Good evening,
First time posting so be gentle with me. I've been using Linux MD in a RAID 6 configuration for my file server for nearly two years. After getting my hands on a ZFS array at work and reading mind-numbing amounts of ZFS documentation, I'm strongly considering moving my disks over to a FreeNAS box and transforming my current server into an ESXi or Hyper-V host. I still have work to do deciding on a hypervisor, but I could really use some help deciding how to set up my home zpools.
The hardware I'll be running FreeNAS on is:
Intel Xeon E3-1220
Intel S1200BTL Server Board
16GB ECC memory (will upgrade to 32GB when money allows)
LSI SAS9211-8i flashed to, I believe, IT mode (I flashed it for pass-through; can't remember the exact name)
6x 4TB Seagate NAS disks
1x 4TB Western Digital Purple
My current case accommodates 12 properly mounted disks
Gigabit Cisco switches
10.5 TB of data
I currently use my server primarily for home media. I have a large collection of music, movies, TV shows, etc. that I enjoy across my home network and occasionally stream remotely. I imagine myself mirroring two SSDs for a future hypervisor datastore.
I understand that drives will fail. I’m currently backing up my data with a set of external hard drives. Not the best solution, I know, but it’s something! XD
The options I'm toying with for my pool (rough zpool commands for the main layouts follow the list):
  1. Purchasing another 4TB WD Purple and striping four mirrored pairs for 16TB of usable space
    I feel this will maximize performance at the cost of extra redundancy, and I like being able to easily expand the pool with another pair of drives as I grow. In my current case I would top out at 24TB in one pool. My fear is that with so many pairs I run the risk of getting unlucky with drive failures, losing two out of the same vDev and therefore the whole pool.
  2. Purchasing another 4TB WD Purple and striping three mirrored pairs in one pool, and eventually another three pairs in a second pool, giving me 12TB in my first pool and starting my second pool with 4TB.
    I feel this will maximize performance similar to option 1 and share many of its weaknesses as well. My thinking with this option is to reduce the risk of losing everything by separating my data into two pools.
  3. Purchasing another 4TB WD Purple and striping two RAIDZ2 vDevs of four disks each, giving me 16TB usable
    This option seems safer as it adds an extra layer of redundancy in each vDev. I'm not sure how much using RAIDZ2 for the vDevs will affect performance. It would also be awkwardly expensive to purchase four drives and another controller all at once to grow the pool to its maximum 24TB. However, this option seems to be the best mix of safety and performance. It doesn't seem to put too much data into one vDev, so it should theoretically resilver quickly when something fails, and each vDev can take a hit and keep chugging.
  4. Placing all seven drives I currently own into a RAIDZ2 for 20TB of usable space, maxing out at 40TB
    Having a vDev that large is scary, as I imagine resilvering would take a long time. I would have to be diligent with manual backups. This option also doesn't leave me with any real options for expansion other than to tear down the pool and rebuild when it's time to expand. It would, however, maximize storage space. I do not know if a RAIDZ2 is able to saturate gigabit. Also, I do not know how ZFS does with drives of different brands in the same vDev.
  5. Placing all seven drives into a RAIDZ3 for 16TB of usable space, maxing out at 36TB
    Compensates for a large vDev with an extra parity disk. Similar to option 4, I wouldn't have any options for expansion beyond tearing down the pool. I would get a lot of storage space, but potentially at the cost of significant performance. Again, this option raises the question of how that WD drive would do in the same vDev as the Seagates.
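For reference, here is roughly what creating the main candidates would look like at the command line. This is just a sketch from my reading; the pool name "tank" and the da0-da7 device names are placeholders, and on FreeNAS I'd be doing this through the GUI anyway:

  # Option 1: stripe of four mirrored pairs (8 disks, ~16TB usable)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

  # Option 3: two 4-disk RAIDZ2 vDevs in one pool (8 disks, ~16TB usable)
  zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7

  # Option 4: one 7-disk RAIDZ2 vDev (~20TB usable)
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6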
I’m sorry for the wall of text, but I tried to think things out a bit before asking for help. Your advice is sincerely appreciated. Please feel free to ask any questions to clarify my ramblings here.
Thanks for your time
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
For clarity:

You are saying you're going to build a FreeNAS as a VM inside of ESXi?

If that's what you're saying, you're on your own. We do not discuss virtualization in the forums, for a number of reasons that you can find by searching the forum.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
All of your options can saturate 1GbE with decent-size files. To me the best balance would be two 6-disk Z2 vdevs. That is the best use of space short of a 12-disk Z3, which is on the wide side and not 'optimal', if that even matters (à la compression). Blech.

The next question is how responsive it needs to be. Your IOPS are proportional to the number of vdevs. If you are feeding ESXi... max RAM and mirrors are the name of the game. Plus an SLOG. The cost is space available, as six drives will go to redundancy instead of the best-case four.
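(An SLOG is just a log vdev you add to the pool; minimal sketch, pool and device names are placeholders:

  # Attach a dedicated SSD log device to an existing pool
  zpool add tank log da12

)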

Z3 is slow and likely not necessary. I'd skip the oddball configs and go 2*6 or 6*2. You've had the heads-up that virtualizing is bad... so I won't go there.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
Thanks for your reply. For clarity I meant to say that I will be separating my storage from my current Ubuntu server, building a second box with the above-mentioned specs, and installing FreeNAS on the new box. The original box will be converted to either a Hyper-V or ESXi host, but that's neither here nor there.

To be clearer, I'm hoping for advice as to how I should set up my pools running FreeNAS on bare metal.

Thanks again
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9

Thanks for your thoughts. The overwhelming majority of traffic to the FreeNAS server will be writing files like movies, TV shows, and music on a roughly once-a-week basis, and streaming that media on my LAN with no more than 2-3 users accessing the shares at once.

Reading that sentence again, striped RAIDZ2s do seem pretty attractive. Due to financial constraints I would probably have to start with two 4-disk vDevs and go from there. Or start with a 6-disk RAIDZ2 with my Seagate drives and, as money becomes available, add WD drives until I have another six to grow the pool the way you suggested. I have no special loyalty to one disk brand or another, mind you; that's just the way the cookie has crumbled for me.
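If I'm reading the docs right, growing that way would look something like this; the pool and device names are hypothetical:

  # Start with one 6-disk RAIDZ2 vDev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5
  # Later, stripe in a second 6-disk RAIDZ2 vDev once I have the drives
  zpool add tank raidz2 da6 da7 da8 da9 da10 da11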

I'm currently summarizing your post in my mind as: either go with two 6-disk RAIDZ2 vDevs or, if I've made peace with losing six drives to parity, go with six 2-disk mirrors over the three 4-disk RAIDZ2 set-up. Please correct me if I've misinterpreted your intent.

You specifically mentioned the demands of ESXi on FreeNAS-backed datastores. I wonder if SMB 3.0 will ever happen on FreeNAS. I remember reading somewhere that Hyper-V 2012 supports SMB 3.0 datastores.

I noticed you scoffed at compression. If you have a moment, could you flesh that out for me a bit? My current reading has been that the default compression has a negligible impact on performance.

Thanks for your advice!
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Nope. Compression is AWESOME. I'm aligned with the folks who believe it pretty much cancels out worrying about an optimal number of disks for a pool. I think the penalties are minimal, but I still hit the optimal counts when I can. 6 is a good number for Z2 :).
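Turning it on and checking what it buys you is trivial; the dataset name here is just an example:

  # Enable LZ4 on a dataset (applies to new writes)
  zfs set compression=lz4 tank/media
  # See the actual savings
  zfs get compressratio tank/media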

4-disk Z2 is gross, imho. The neat thing about the mirrored config is you can add two more drives pretty much indefinitely. It also maxes out your potential IOPS for use as a datastore. For media only, I'd go 6-wide Z2 with your first six disks and just hold out until I got the extra five... or start a second pool with the intent to transfer those disks when the set of six was complete. I have a box like that serving exactly your workload.
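That growth path, sketched with placeholder names:

  # Start with a single mirrored pair...
  zpool create tank mirror da0 da1
  # ...and keep striping in pairs as you buy them
  zpool add tank mirror da2 da3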

For a home ESXi box... a couple of local SSDs for a fast datastore will crush any SAN/NAS performance that mere mortals can achieve.

I kinda ignore Z3 as a slow and ugly duckling. It just doesn't tick any boxes for me. Nor does redoing my pools.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
Thanks very much for explaining that. Okay, so it sounds like for serving media the best way to go is two 6-disk Z2 vDevs, based on your opinion and experience. Assuming that can saturate gigabit with reasonable satisfaction, and since I already have a large SSD to use as a local datastore for my eventual hypervisor, I think I'll start with the 6-disk pool and work my way up to the second vDev as I can.

I do really like the idea of mirroring pairs for the ability to just add two disks at a time as I need more space, but if I'm not going to see any performance benefit in my use case, then it's hard to spend the money on all the extra parity disks. Even if it would be awesome.

Must buy more disks... XD
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
Supposedly, a WD Purple would not be as good a choice for a NAS as anything from the same vendor's NAS line.
Out of the two folks I've asked about that, I've heard opposite replies. I picked up a Purple because that's what the store had at the time and I had a merchandise return window closing. I am strongly considering exchanging it for a Red or a Seagate NAS product but am struggling to find definitive evidence for why I should, other than "that's what the manufacturer says." Which I suppose should be good enough for me, but still... XD. Any opinions on this are more than welcome!
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Not sure you'll find that many valid opinions; truth is most of us wouldn't touch 'em ;). My take is they've tried to incorporate some type of streaming technology they purchased into the firmware, and it supposedly makes decisions about spindle speed and seek based on the assumptions of surveillance workloads. I think they are spec'd for low total throughput, intended to run cool, and of course have no vibration compensation or TLER.

I'd swap it. Every day, all day. Who cares if it "might work"? They aren't cheaper or better in any way for NAS use that I can see. You pay for extras that don't benefit you. If it was free, I'd run it. :) Greens "don't work" either, in some people's view. Yet with wdidle, around here they work well.

All that is worth the price you paid for it, but it's based on all the information I've seen that wasn't marketing BS.
Disclaimer: I Don't Own.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
I like that plan. Going to swap it for a 4TB Red, most likely. I've been happy with my Seagate NAS drives, but the 4TB ones are overpriced at Micro Center, and I want to say I've seen benchmarks where the Reds perform better.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
In systems with many spindles I think there is a case to be made for diversity, across both brand and timeline (batch). Realistically, benchmarks on spinny drives have become pretty meaningless. They all push 150+ MB/s sequentially and all suck for seek compared to SSD. Heh, I can remember comparing specs for weeks to try and eke out a little extra. Now I grab whatever is on sale and stick so many of them together that it just doesn't matter. Fun times.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
A vDev of my Seagate NAS drives and a vDev of WD Reds, then. In the name of diversity!

Thanks a lot for all your advice in this thread
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Lol. I have been chatty. :) It's always a pleasure to help those who do a little legwork and intend to do it right.

We'll have you addicted to the power in no time ;)
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Diversity? Yes, but by mixing different brands' lines you are getting the worst of each in terms of performance. Try to get all your disks not manufactured in the same week; that is the advice I usually give.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
:D Well, I'm grateful. I'm looking forward to getting some time with the Oracle ZFS array at work.

What do you mean by "getting the worst of each"? Are you suggesting that there is a disadvantage to using two different brands of disks in two separate vDevs? Thanks for explaining.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Your vdevs are only as fast as the slowest device. My suggestion was not about mixing a crap disk with a good one, but rather Seagate NAS and WD Red, or another combination of similarly fast drives. jgreco has a few good posts on this that convinced me it was viable and smart. Traditionally I've always matched things up perfectly on hardware RAID.

Solarisguy can elaborate if he has further insight.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
mjws00 summed it up better than I did. Think of the lowest common denominator across all the performance parameters. It might be nothing to worry about, but it is a real factor.

People often point out that, when replacing failed disks, different models get substituted out of necessity; but the idea is that at first the identical model is still available, and after, let's say, 2-3 years, you are getting better disks anyway.
 

Erk1209

Cadet
Joined
Sep 22, 2014
Messages
9
Thanks again, gentlemen. I built a RAIDZ2 with the six Seagate disks the other day but haven't had a chance to wiggle with permissions so Windows and Linux can play nice with the same share (rough plan sketched below). For now, I have two 4TB WD Reds that I may mirror and pair with an SSD to play around with as storage for ESXi or Hyper-V, but I won't be doing anything mission-critical with that vDev for now. The game plan is to eventually build a six-disk Z2 with the Reds when I need the storage. May or may not stripe them with the Seagates; we'll have to see. I imagine it's best to, though.
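My rough plan for the permissions; completely untested on my box so far, and the dataset and group names are just placeholders:

  # Dataset for the media share
  zfs create tank/media
  # Common group ownership so SMB (Windows) and NFS (Linux) users can both write;
  # the setgid bit (the leading 2) makes new files inherit the group
  chown -R root:media /mnt/tank/media
  chmod -R 2775 /mnt/tank/media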

Really enjoying FreeNAS so far, thinking of throwing $300 down on the first training class...
 