Planning storage configuration for a new build

Status
Not open for further replies.

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
I'm in the middle of my first FreeNAS build and about ready to order drives, but I want to settle on a configuration before I drop a bunch of money on the most expensive part of the project. I'm currently a college student paying my own way, so I don't exactly have the budget to go all out on 6x4TB enterprise-grade drives, but I also understand that I can't just add a drive at a time to a ZFS setup, so some planning at this stage is probably called for. My current build is based around the ASUS C60M1-I with 16GB RAM in a chassis that supports up to 6 drives.

From the looks of things, RAIDZ2 would likely be the best choice, but as I said, money is a bit tight, and 6 drives is probably out of the question right now. So what I was thinking was to go with 3 drives in a RAIDZ1 configuration, then later add the additional 3 drives as a separate vdev. If I understand correctly, this would also let me expand each vdev independently later by replacing 3 drives, rather than having to replace all 6 as I would in a 6-drive RAIDZ2 configuration. The obvious downside is that losing 2 drives in the same vdev would kill the whole pool, but therein lies the eternal struggle between capacity, reliability, and cost. Personally, where I'm at right now, I'm okay with single-failure protection, since I'll probably be stuck with the 3 initial drives for the foreseeable future anyway, and even that is more failure protection than I have right now on my laptop and scattered collection of external USB hard drives. This NAS is primarily going to be used as a backup server for my existing data, and it's just for me, so it's not going to see a lot of heavy use.

I understand that drive configuration is a subjective topic, but I'm a bit overwhelmed by all of the information I've read while getting into this project, so I figured I'd ask for some feedback. Is my current plan any good, or is it guaranteed to blow up in my face? I'd appreciate any comments, suggestions, criticisms, etc. My drive budget right now is about $350, so I'm looking at 3x2TB WD Red drives, currently running $110 each at Tiger Direct, configured as RAIDZ1 as I mentioned above.
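For my own sanity, here's the back-of-the-envelope math I'm working from, as a rough Python sketch (it ignores ZFS metadata overhead, swap, and the TB-vs-TiB difference, so the real numbers come out somewhat lower):

def usable_tb(n_drives, drive_tb, parity):
    # Approximate usable space: total raw capacity minus the parity drives.
    return (n_drives - parity) * drive_tb

options = {
    "3x2TB RAIDZ1 (now)": usable_tb(3, 2, parity=1),
    "two 3x2TB RAIDZ1 vdevs (later)": 2 * usable_tb(3, 2, parity=1),
    "6x2TB RAIDZ2": usable_tb(6, 2, parity=2),
}

for name, tb in options.items():
    print(f"{name}: ~{tb} TB usable")

So the two-vdev plan and a 6-drive RAIDZ2 land at roughly the same usable space; the real difference is which combinations of two simultaneous drive failures each layout can survive.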
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You don't need enterprise-class drives.. those 4TB drives that are $160 or so.. the consumer ones.. are just fine!

But, that motherboard.. that's gonna be a fail and a half. That CPU is wholly unacceptable for ZFS processing.

That thing is literally like 1/8th the performance of our recommended cost-effective CPU. No joke.. I looked it up. And that's assuming your CPU will even boot FreeNAS and that your motherboard hardware works.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We don't really suggest enterprise-grade drives. (Desktop, yes, I prefer to avoid the Greens though.)

But we also don't really recommend the C60M1-I either. It lacks ECC and it sports the contemptible Realtek 8111 ethernet, plus it only supports 8GB. It's a board that wasn't well-liked even several years ago IIRC.

The sad reality is that it is hard to do FreeNAS cheaply. The cheapest components are simply not a good choice. The Realtek ethernets suffer from a variety of issues that make people sad/angry/furious. ZFS's data protection is designed around the assumption that the host has ECC and won't be corrupting data in core. The core speeds on that APU are very low and won't be pleasing with the computationally heavy ZFS RAIDZ calculations. The only thing that board has going for it is the SATA3 ports. I don't have any happy answers for you. Most of the worthwhile solutions for system board and CPU seem to end up running at least $200.

As for the pool layout, well, given that you could always back your data up from the NAS onto all those externals, you might keep your eyes open for sales on the Barracuda 4TBs, which occasionally go on sale for $139-$149, and pick up a pair of those and mirror them. The upside there is that you could later pick up two more, nuke the pool, and rebuild it as a RAIDZ2, giving you 8TB of usable space. Or you could pick up four more, giving you 16TB. Getting smaller drives now "feels" okay right up to the point where you end up wanting to do something bigger, at which point you've wasted your money on small drives. Better to buy the big ones and find a way to make it work. My opinion, at least.
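(One thing to be clear about, since it trips people up: a vdev's layout can't be changed in place, which is why the "nuke the pool" step is there. Conceptually, at the zpool level, it's something along the lines of "zpool create tank mirror ada0 ada1" today, then later copy the data off, "zpool destroy tank", and "zpool create tank raidz2 ada0 ada1 ada2 ada3" with the larger set of drives. The FreeNAS GUI does the equivalent for you, and the pool name and device names there are only placeholders.)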
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
Well, the hardware is purchased other than the drives, so this may be a live-and-learn experience, but for now it's what I have to work with. As for the motherboard, all I can say is that I've seen more than one person use it, and it seemed to be well liked for its price point, but I have no illusions about running anything like encryption or dedup. The board does actually support 16GB of RAM; I saw a few posts implying as much, so I decided to try it, and sure enough it works. You're right about ECC, though...

My understanding is that the Realtek NIC issues were due to the lack of real driver support until the latest FreeBSD release, but we'll see how it goes.

My comment about enterprise-grade drives was hyperbole; I know desktop-grade drives are sufficient, provided they don't implement some of the more aggressive power-saving measures (e.g. the WD Greens).

So I guess at this point I'm still just looking for advice on drive configuration.

 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
Well... I've done some more reading, and you've convinced me to reconsider the C60M1-I, mostly on account of the non-ECC RAM. Wish I'd asked around more before buying, but I'm sure I can find another use for that board down the road. Now, to find another board that fits my specs without breaking the bank. The ASRock C2550D4I looks promising...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And it's probably a good choice. The C2750 is nicer but pricey, and if you don't need the extra cores it's also a waste.
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
I don't plan to need the extra processing power; I won't be doing anything like Plex. Basically, I'm planning on using this as straight-up storage for system and data backups of my main machines, not so much serving that data back to them (though I'll be using BTSync, so this box will pretty much act as the master hub, making sure data gets synced between computers even if they aren't all online at the same time). That's why I was hoping to get away with underpowered hardware, but considering this will be my main backup storage, the idea of the whole pool going up in flames due to bit rot in RAM is just not a risk I'm willing to take.

So now I'm basically looking for the cheapest Mini-ITX board that supports ECC RAM and has 6 SATA ports (though a PCIe SATA card would work too, if that ended up being cheaper), and the C2550D4I seems to fit that bill. I really don't care much about performance, but data integrity is too crucial to gamble with, considering that's the entire purpose of this box.
 

costre

Cadet
Joined
Apr 14, 2014
Messages
5
We don't really suggest enterprise-grade drives. (Desktop, yes, I prefer to avoid the Greens though.)

"The greens?" WD Greens? Why are they a bad choice?
I have a retailer ready to provide me with several 4TB WD Greens. Should I back out?

Yes, it's my first post :) I will go into my build in more detail in a while.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Search the forum for a thread about Red vs Green.

No duct tape required. :smile:


 

costre

Cadet
Joined
Apr 14, 2014
Messages
5
I will look into that. I am also looking into this business of tweaking the various drive settings... I have never had the need to tinker with such things before :)
I am about to pour quite a lot of money and energy into this setup, and I hope I will be prepared!
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
"The greens?" WD Greens? Why are they a bad choice?
I have a retailer ready to provide me with several 4TB WD Greens. Should I back out?


From what I understand, the WD Green drives can work great in a FreeNAS system if some of the more aggressive power-saving features are disabled (or at least reconfigured) using WD's firmware tool wdidle3.exe. However, some newer Greens have removed the ability to apply these tweaks, which has led some people to speculate that the WD Reds may just be re-labeled Greens with slightly different firmware (though there's no way to definitively confirm that).

...but that's just my understanding from reading around, so anybody feel free to correct me if I'm wrong. I do know that people still recommend the Greens, but with the default power settings there can be issues ranging from basic performance degradation to drives actually being dropped out of the array because overly aggressive power saving keeps them from responding quickly enough. My current build is 2x2TB Reds + 1x2TB Green, since I had a Green lying around with very little use (I intend to replace it soon, but I don't have the money right now, so I've planned out a 3-stage incremental upgrade from this initial 3x2TB RAIDZ1 up to my eventual goal of 6x4TB RAIDZ2).
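If you want to see whether head parking is actually a problem on a given drive, SMART attribute 193 (Load_Cycle_Count) is the one to watch; roughly speaking, if it's climbing by hundreds of cycles a day, the idle timer needs attention. A quick Python sketch using smartmontools' smartctl (the /dev/ada0 device name is only an example, adjust it for your system):

import subprocess

DEV = "/dev/ada0"  # example device node on FreeBSD/FreeNAS; adjust for your drives

# "smartctl -A" dumps the SMART attribute table; attribute 193 is Load_Cycle_Count.
out = subprocess.run(["smartctl", "-A", DEV], capture_output=True, text=True)
for line in out.stdout.splitlines():
    if "Load_Cycle_Count" in line:
        print(line)

Running it (or just the smartctl command by itself) a day apart and comparing the raw value tells you how fast the count is growing.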
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
jgreco's opinion is valid. (I have 24 Greens too).

Greens require more than "drop them in" work to get them working well in your server. If the cost savings are worth a minute's worth of extra work, check out the How-To guide I wrote in the forums on making Greens work well in your server. They *can* work well if you do it right. If you just drop them in, you'll be disappointed.
 

costre

Cadet
Joined
Apr 14, 2014
Messages
5
Is it a performance question, or can you actually mess things up properly due to these poor response times?
Good to hear it's possible to set them up correctly, though.

Is there a thread where one can discuss hardware vendors? I have been searching for a while and come up with good prices from a few places.
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
If the drive doesn't respond, the array may think it's gone bad and drop it from the pool entirely, and you wouldn't really know whether it was *really* going bad or just unresponsive due to power-saving measures. wdidle fixes that particular issue for the most part, from what I've read. Listen to cyberjock; he has way more experience than I do.

 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's nothing like that, qwertymodo. It has nothing to do with performance. It has to do with excessive parking of the heads leading to premature failure of the disk.
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
I've heard of people having stock-configured Greens dropped out of their arrays prematurely while still being perfectly good... maybe that has more to do with TLER than IntelliPark. But in any case, Greens + wdidle still seems like a perfectly solid option, and if anything I said seemed to imply otherwise, my bad. wdidle does make a significant difference, though.

 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah.. that would be a TLER problem for sure. And that's not a problem specific to Green drives; it's a problem for basically all consumer drives.
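(For anyone who wants to poke at it: on drives that actually support SCT ERC, you can read the current error-recovery timeout with "smartctl -l scterc /dev/ada0" and set it to 7 seconds with "smartctl -l scterc,70,70 /dev/ada0", where the values are in tenths of a second. The setting usually doesn't survive a power cycle, a lot of consumer drives simply reject the command, and the device name there is only an example.)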
 

costre

Cadet
Joined
Apr 14, 2014
Messages
5
That makes sense; constantly parking and unparking the mechanism would make for unnecessary wear. Of course, it's a sacrifice made for lower power consumption...
My thought is to build a 24-drive setup, and 4TB WD Greens for $123 a pop seem very affordable. To hell with power bills, as long as the NAS is healthy :)
 

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
Personally, I chose to set the idle timer to the max (300s) rather than disabling it entirely, but that's a matter of preference based on your use case. Mine is going to be sporadic, "bursty" usage with lots of idle time in between, so I still want my drives to idle, just not after 8 seconds :P
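(For reference, and from memory, so double-check against the tool's own help before running it: WDIDLE3 /R reports the current timer, WDIDLE3 /S300 sets it to the 300-second maximum, and WDIDLE3 /D disables it. It's a DOS utility, so it gets run from a bootable DOS USB stick rather than from within FreeNAS.)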
 

costre

Cadet
Joined
Apr 14, 2014
Messages
5
My array will probably be used in bursts 90% of the time, and pretty much continuously the remaining 10%.
Your idle-time setting makes sense, but I'm sure I'll be fiddling a bit on my own.
 