First build, advice please!


Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Hey everyone!

I'm thinking of putting together a NAS, and I'd like some advice. I work in IT, but mainly with smallish Windows servers (up to about 30 users), and I have never personally configured a RAID before. Nor do I have experience with Linux or FreeBSD, so this is a pretty steep learning curve. I'm up to about 25 hours of reading on the topic (including the excellent guides by cyberjock, though I can't claim to have understood them completely yet), and I'd like to present a few thoughts on my requirements and what I'm currently considering. Threw up a poll just for fun, too.

Requirements:
-One home user, with occasions where there might be up to around 3 people accessing the NAS
-Currently have about 6TB on assorted drives that I'd like to consolidate onto a NAS
-The data isn't important enough to back up, but it would be a pain to lose it. It's mostly media I could eventually recover (video, disk images). I'd like a reasonable level of protection though
-I'd like to start with 4x4TB drives, but have the option to add drives over time
-I'd like to store Steam games I'm not actively playing on there via symlinks, and I wonder if it would be viable (if a little slower to load) to run games from the NAS. I'd keep anything that's being used frequently on the PC itself; I'm just wondering about the overflow.


Option 1:
RAIDZ2. My understanding is this will require more robust, server-grade hardware, be more expensive, and be more difficult to set up correctly. It will not be possible for me to just add extra drives (without introducing single points of failure for the whole array). However, protection against file corruption is excellent. Possibly even perfect, if set up in the correct way with ECC components?

With this option, are there any future plans to allow this kind of expansion?

Option 2:
RAID 6. This would allow me to upgrade over time, and would be easier to set up and manage. I could also get away with cheaper hardware. The downside would be an increased likelihood of file corruption; however, this is likely to only affect single files. Losing a few episodes of... Linux distros to flipped bits isn't the end of the world.

If I went with this option, is there any method to convert RAID 6 to RAIDZ2 without first having to offload the data? That way I could start (and expand) with RAID 6, then convert for the benefits of Z2. If this is possible, I'd consider spending the extra money for the hardware of Option 1 so as to allow the option to convert.

Would FreeNAS be appropriate for this option, or are there more appropriate solutions?

On further reading, it seems FreeNAS only handles ZFS, and wouldn't manage a RAID 6. I suppose this also means converting from RAID 6 to Z2 isn't going to be possible. What software would people recommend for this option?

Hardware (proposed):
(Option 1)
Case: Silverstone DS380B (loving the compact size)
Motherboard: Supermicro X10SL7-F
PSU: Silverstone SFX 450W mATX 80PLUS
RAM: 16GB Samsung M391B1G73BH0-CK0 (listed as supported)
CPU: i3 4130
USB: 4GB - unresearched, but I believe I'll want at least 2 just in case?
SSD: I think I need one for cache? Clarification required.
Approximate cost: $1200

(Option 2)
Case: Silverstone DS380B
Motherboard: Undecided, but 8 sata ports and ITX are a must. Suggestions very much appreciated.
PSU: FSP SFX 450W
RAM: 8GB of whatever looks reasonable and reviews okay at the time
CPU: Celeron G1820 or G3420
USB: 4GB - unresearched, but I believe I'll want at least 2 just in case?
SSD: Not needed?
Approximate cost: $600

Notes/Questions:
I'm leaning towards Option 2

With z2, am I correct in thinking UREs are likely to be caught before they can become a real problem, assuming regular scrubbing?

How will the two systems compare for speed?

Will rebuild times be similar between the two systems?

I think I'd be best off buying drives from multiple retailers/manufacturers over a period of time to avoid drives that are likely to fail together?

Green drives are fine, but need some kind of tweak to work well in a NAS?

I can get external 4TB Seagates and rip the drives out for about the same price as a Green, and they have a longer warranty than the greens. Any problems with this idea?

Components must be available in Australia, or able to be shipped without dramatically increasing the price.

Important data is backed up off-site.
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
I have an idea. This may be really dumb, but on the other hand if done slowly and carefully it might be safe. What do you think?

I currently have a number of drives both in-use, and sitting around un-used. This is what I have:

3TB WD Green (in use, SMART suggests it's okay)
3TB WD Green (awaiting RMA, purchased at a different time to the previous drive)
2TB WD Black (awaiting RMA)
2TB Samsung (status unknown, need to run tests)
2TB Seagate (status good)

Seeing as I'm pretty sure I can't increase the number of drives in a z2 array, could I do this?

Test all drives to ensure they are in good condition
Assuming they all pass:

1. Buy 4x 4TB drives
2. Set up RAIDZ2 with those drives, plus the spare Green and all the 2TB drives. I believe this would give me 12TB of space, 4TB parity, 9TB unused (see the quick sketch after this list). I should be in a situation where I can lose any 2 drives.
3. Test array to ensure everything is stable, maybe stress it a bit with data that's backed up. Perhaps run it this way for a few weeks
4. Move data from the remaining 3TB WD to the array
5. Remove the Samsung (or Seagate), replace it with the 3TB WD and perform the steps to upgrade the HDD. This should give me 12TB space, 4TB parity, 10TB unused
6. Run the array like this, replacing the older drives over time with new 4TB drives until I've built it out to 8x4TB.
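
To show my working, here's a quick back-of-the-envelope sketch. It just encodes my understanding that a RAIDZ2 vdev treats every member as the size of its smallest disk and gives two of those up to parity; please correct me if that assumption is wrong.

[CODE]
def raidz2_capacity(drive_sizes_tb):
    """Return (usable, parity, unused) in TB for a single RAIDZ2 vdev."""
    smallest = min(drive_sizes_tb)
    n = len(drive_sizes_tb)
    usable = (n - 2) * smallest                  # everything striped at the smallest disk's size
    parity = 2 * smallest                        # two disks' worth of parity
    unused = sum(drive_sizes_tb) - n * smallest  # capacity above the smallest disk is wasted
    return usable, parity, unused

# Step 2: 4x 4TB, the spare 3TB Green, and the three 2TB drives (8 disks)
print(raidz2_capacity([4, 4, 4, 4, 3, 2, 2, 2]))   # -> (12, 4, 9)

# Step 5: one 2TB swapped for the other 3TB Green
print(raidz2_capacity([4, 4, 4, 4, 3, 3, 2, 2]))   # -> (12, 4, 10)
[/CODE]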
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Let's see, from the top:

You cannot add drives to vdevs as you please. You can, however, join two vdevs in one pool. Say an original 4x4TB RAIDZ2 + a new 4x4TB RAIDZ2. Removing stuff implies destruction of the entire pool (unless you're just replacing individual disks).
RAIDZ2 is RAID6, the difference being that RAIDZ2 is ZFS' responsibility and RAID6 is typically a RAID controller's responsibility. Neither allows for great flexibility in adding or removing drives.

ZFS, in a good scenario (system correctly configured, proper hardware, ECC RAM, UPS backup that is configured, regular scrubs and regular S.M.A.R.T. tests, with e-mail warnings) will pretty much protect you from hardware failure and bitrot.

General consensus is that you need 64GB+ of RAM before you even start thinking of adding SLOGs and L2ARCs. 8GB/16GB is definitely a no-go. Best case you gain nothing, worst case you get a slower system. These can always be added later should testing prove you would benefit from them.

The G1820 should be plenty if you won't be transcoding stuff on the fly. The i3 4130 should be good for one transcode (maybe two) and the Xeons for more than that. As for rebuild times, you'll need an expert's opinion.

You can certainly buy two separate batches of HDDs, but it's by no means guaranteed to yield any improvement in longevity. This is, of course, assuming you properly stress test your drives before using them "in production".

The Green vs Red debate boils down to this: you'll definitely need to tweak the Greens for server duty (check the appropriate threads), which may (emphasis on the uncertainty there) not be required on the Reds. The only certainty is that you get an extra year of warranty for roughly 20 bucks (Australian prices may be seriously inflated, from what I've heard). Personally, I'd probably buy Reds.

Don't expect to keep the warranty on the drives if you disassemble them. Other than that, they're just regular drives stuck in an enclosure.

As for your idea, it's possible, but I'm unsure of the performance implications of mismatched drives (Green and Red and similar are close enough for it not to matter, but the odd Black makes me hesitate a little).
However, keep this in mind: For optimum performance, you must have either 4, 6 or 10 drives in RAIDZ2. Other numbers will work, but performance may take a serious hit. 6 drives is what I'd typically prefer.
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Hey thanks for the reply! A few follow-ups, if you don't mind?

You can, however, join two vdevs in one pool.
So I've read, but won't having two 4-disk Z2 vdevs give only 16TB of usable space, as opposed to 24TB?

Neither allows for great flexibility in adding or removing drives.
I was under the impression that Raid 6 can accept new drives. Am I mistaken on this?

SLOGs and L2ARCs
I don't actually know what these are yet, but a quick google suggests this is the SSD cache I asked about. Glad to hear I won't need it.

transcoding
No need for that at present, can always throw in something better if my needs change.

Don't expect to keep the warranty on the drives if you disassemble them.
Should be fine, providing I can disassemble them without causing damage to the enclosure. I'll just throw them back in if I have to RMA.

For optimum performance, you must have either 4, 6 or 10 drives in RAIDZ2
This is news to me. A quick Google turns up this article, which suggests access times increase fairly linearly as drives are added, but that I/O and read/write speeds improve. (Edit: that article is about hardware RAID and does not apply to ZFS.) Do you have any more information on this?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
RAID6 doesn't accept anything. OCE is a feature that is exclusive (but not mutually exclusive) to hardware RAID.

And that link to Tom's Hardware is NOT about ZFS. So don't be bringing that hardware RAID stuff to this forum. It's not applicable and will only confuse would-be readers. ;)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Again from the top:

Pirateguybrush said:
You can, however, join two vdevs in one pool.
So I've read, but won't having two 4-disk Z2 vdevs give only 16TB of usable space, as opposed to 24TB?

Yes, each RAIDZ2 vdev uses two drives' worth of parity, so 4 drives will have a greater parity/data ratio. You get less usable space, but greater resilience against disk failures. That is why the sweet spot is often quoted as 6 drives. Too many drives and suddenly the prospect of losing two (or three, for an absolute nightmare scenario) drives in one vdev becomes much more real.
Since ZFS was designed with Enterprise in mind, features like quick addition/removal of disks without much care are not present. That's why you can't just expand a vdev in most circumstances (you can add mirrors, but nothing else).
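
Putting rough numbers on that, assuming 4TB disks throughout and the usual two-disks-of-parity-per-RAIDZ2-vdev rule:

[CODE]
DISK_TB = 4  # all-4TB example from the question

def usable_tb(vdev_widths, parity=2):
    """Usable TB for a pool made of RAIDZ2 vdevs of the given widths."""
    return sum((width - parity) * DISK_TB for width in vdev_widths)

print(usable_tb([8]))     # one 8-disk RAIDZ2 vdev  -> 24 TB usable
print(usable_tb([4, 4]))  # two 4-disk RAIDZ2 vdevs -> 16 TB usable
[/CODE]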

Pirateguybrush said:
Neither allows for great flexibility in adding or removing drives.
I was under the impression that RAID 6 can accept new drives. Am I mistaken on this?

In practice, most RAID6 implementations are just as limited, or more so, since you can't quickly group stuff into a single large pool. I'm sure some implementation out there will allow you to do all sorts of things without requiring the destruction of the array. I'm not sure I'd trust those to work very well...

Pirateguybrush said:
SLOGs and L2ARCs
I don't actually know what these are yet, but a quick google suggests this is the SSD cache I asked about. Glad to hear I won't need it.

SLOGs are drives used for the ZFS intent log (to reduce load on the pool for sync writes). This has to be mirrored for data safety, as it contains data that is yet to be committed to the pool.
L2ARCs are drives used as an additional cache level for reads. Sounds good for all cases, but unfortunately they require an index, which is naturally kept in RAM. That tends to clog things up on low-RAM systems.

Pirateguybrush said:
transcoding
No need for that at present, can always throw in something better if my needs change.

Since you don't need transcoding, the G1820 should do fine, for less than half the price of the i3 and a quarter of the Xeons' price.

Pirateguybrush said:
For optimum performance, you must have either 4, 6 or 10 drives in RAIDZ2
This is news to me. A quick google turns up this article, which suggests I'll increase access times fairly linearly by adding drives, but see improvements in I/O and read/write speeds. That's on Raid 6, but perhaps things are different in Z2, or this article might be wrong, or things have changed since 2007. Do you have any more information on this?

There are two factors at work here:

On one hand, you have a maximum recommended vdev size. This size is 11 or 12, depending on whom you ask. This is due to a number of factors. While not a hard limit, you're advised to stick to it.

Then, there's the fact that ZFS writes in fixed (somewhat-fixed?) chunks. If these are divisible by 4K (the drives' sector size), you don't have a problem and the drives will be operating normally. If not, the drives will have to read/write more individual sectors, impacting performance. If you remember the 512B-4KB transition a few years back, this is similar.
How is this dealt with? There's a formula:

Number of Drives = 2^n + p

Where n is a positive integer and p is the number of parity drives (1 for RAIDZ1, 2 for RAIDZ2 and 3 for RAIDZ3).

Both conditions, when considered together, yield the following optimum vdev sizes:

RAIDZ1: 3, 5, 9
RAIDZ2: 4, 6, 10
RAIDZ3: 5, 7, 11 (this is why the maximum is the somewhat arbitrary-sounding 11 drives)
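
If it helps, here's the same rule spelled out as a quick enumeration (the 11-drive cap is the rule of thumb mentioned above, not a hard limit):

[CODE]
MAX_VDEV = 11  # commonly quoted maximum vdev size

for parity, name in [(1, "RAIDZ1"), (2, "RAIDZ2"), (3, "RAIDZ3")]:
    # data disks are a power of two, plus the parity disks
    widths = [2**n + parity for n in range(1, 5) if 2**n + parity <= MAX_VDEV]
    print(name, widths)

# RAIDZ1 [3, 5, 9]
# RAIDZ2 [4, 6, 10]
# RAIDZ3 [5, 7, 11]
[/CODE]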


I hope this clears it up
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Thanks for clearing up those things. So to summarise and ensure I'm understanding you both:

RAID 6 and RAIDZ2 cannot be expanded with additional drives. RAID 6 with a hardware controller may offer this ability, but at additional cost, plus the risks of running a hardware RAID without a backup controller on hand.

In performance terms, which aspects of performance is an 8-drive array likely to affect? Keep in mind I'm looking at a single-user scenario and don't see that changing, so there's a hard limit of about 100MB/s. Are there any numbers out there from people who have tested this?

If the performance hit is too great to swallow, it sounds like 6 drives would be the best option. If I were to do this, would using the two 3TB Greens plus 4x4TB drives be acceptable to start with? If that was in RAID 6, I'd expect to see 12TB usable, 6TB parity, 4TB unused. Would that be the case with Z2, or is it calculated differently?

@cyberjock - Edited my post to avoid misleading people who come later. :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I think there was a thread about atypical vdev sizes and RAIDZ some time ago. Maybe someone remembers it better than I do... In any case, expect random workloads to be especially affected. Sequential workloads tend to "hide" the additional time spent reading/writing the extra sectors. Random workloads will require tons of extra sectors to be read, though.

A RAIDZ2 with 2*3TB and 4*4TB will give you 12TB usable, 6TB parity and 4TB unused (in practice, available space will be lower due to ZFS needs like swap. Additionally, do not completely fill your pool, as doing so will cause you a lot of trouble).
After swapping out one 3TB drive for a 4TB drive, you get 12TB usable, 6TB parity and 5TB unused. Only after replacing the last drive do you get 16TB usable and 8TB parity.
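
Same smallest-disk arithmetic as in the earlier sketch, if you want to sanity-check the numbers yourself:

[CODE]
def raidz2(sizes_tb):
    """(usable, parity, unused) in TB; every member counts as the smallest disk."""
    smallest, n = min(sizes_tb), len(sizes_tb)
    return (n - 2) * smallest, 2 * smallest, sum(sizes_tb) - n * smallest

print(raidz2([3, 3, 4, 4, 4, 4]))  # starting point      -> (12, 6, 4)
print(raidz2([3, 4, 4, 4, 4, 4]))  # one 3TB swapped out -> (12, 6, 5)
print(raidz2([4, 4, 4, 4, 4, 4]))  # all 4TB             -> (16, 8, 0)
[/CODE]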
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Hm. I'll have to see if I can find that thread, or any performance numbers. I'll be mainly using it to store and stream video and disk images (mostly video, but no editing), so most activity would be sequential. The only usage case that would require much random access would be gaming, and I didn't have much hope for that being practical to begin with. If anyone else can help fill me in on this, it would be much appreciated.

Good to know about the space, thanks. As for hardware, it looks like this is where I stand right now:

Case: Silverstone DS380B
Motherboard: Supermicro X10SL7-F
PSU: Silverstone SFX 450W mATX 80PLUS
RAM: 16GB Samsung M391B1G73BH0-CK0 (listed as supported)
CPU: G1820
USB: 4GB

If I decide the performance impact is worth the extra storage, I'm going to need 8 ports. The X10SL7-F looks like a reasonable board, and I can get it for about $290US. That includes about $40 shipping. Is there a cheaper board that would serve my needs (and has reasonable availability)? If I decided to go for 6 ports, would there be a noticeably cheaper option down that path?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Streaming video shouldn't be too impacted, in theory. Gaming is not really feasible in any scenario; many games load textures even during gameplay, which would be unacceptably slow over a network.

I just noticed a separate problem, though:

The Supermicro X10SL7-F does not fit inside the case you chose, as it's microATX (the largest the DS380 takes is mini-ITX).

Unfortunately, it's harder to find decent miniITX motherboards. I don't really know much about the current server offerings, either...

If you don't mind something bigger, it becomes much easier to find a good motherboard (the Supermicro is an excellent choice) and you can get a better PSU for similar amounts of money (I recommend the Seasonic G-series - my Seasonic-designed Corsair PSU is excellent and Seasonic is known for their quality).

If you go with 6 ports, the whole thing becomes easier, since that's the number of ports provided by an Intel C2xx PCH, lowering price and reducing size. Mini-ITX will still require some research, though.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You don't have many options for a mini-ITX mobo. Some of the users here, and iXsystems' new "Mini", are using an ASRock board. Search the forum for details.


Sent from my phone
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Ah, you're correct. Not sure how I missed that. I rather like the look of the DS380. It's compact and has removable drive bays. I already have a hulking great Kandalf, so I'd really prefer something on the smaller side. It looks like the ASRock C2750D4I works with FreeNAS, and I can get that for $400US (including $30 shipping). The CPU is onboard, but if I've interpreted it correctly it should be much faster than anything I was considering. So factoring in the cost of the G1820 for the Supermicro (taking that combo to $350), the ASRock would be an additional $50. Thoughts?

The other option I have would be to go for another case/motherboard combo, which would make the most sense if I opted for 6 drives. I'll try and dedicate some time tomorrow to researching the speed differences and making that decision. If I opted for 6 drives, is there a more economical case/motherboard combo you know of? Or a more appropriate/cheaper ITX board?

Would love some more input on my drive swapping idea from others too, in case I do decide to make the trade and go with 8.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That specific board has a few talking points:
  • It's an Atom processor, so you trade single-threaded performance for multithreaded performance (In other words, the G1820 is faster in most cases, but the Avoton Atoms should be fast enough)
  • The Marvell controllers driving the additional SATA ports are currently not working well with FreeBSD, so waiting is probably recommended until there's a driver fix (hopefully it's as simple as that).
  • Contrary to expectations, it actually supports more RAM than a motherboard based around a Haswell processor (like the G1820). Why? For some reason, Intel processors other than the Avotons (including the upcoming Haswell refresh, I guess) have a very weird limitation that keeps them from working with 16GB DIMMs. This was fixed starting with the Avotons. This is a minor concern, typically, since maximum memory should not be a problem for most people at 32GB. 64GB makes sense for people with very large pools and high performance requirements, but those typically go for bigger stuff like E5 Xeons.
  • There's some funky PCI-e switching going on to enable 2 GbE controllers and 6 additional SATA channels provided by two controllers. This may impact performance in some scenarios. Details here.
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
A potential case alternative would be the Fractal Design Define R4. Reasonably priced, 8 bays should I decide to go that way, and not too much bigger. Also allows for microATX.

If I were to go down this path, do you have any motherboard recommendations?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
MicroATX makes the choice much easier, since it covers Supermicro's X10 boards. More than 6 drives? X10SL7-F. 6 or fewer drives? Take your pick: there's the X10SLL-F, which is the "basic" model, and the X10SLM-F, which only adds more USB 3.0 ports (not very useful at the moment in FreeNAS) and upgrades the 3Gb/s SATA ports to 6Gb/s (not very useful for mechanical drives). Then there's the X10SLM+-F, which replaces the cheaper Intel GbE controller with a second Intel i210 (this will almost certainly not make an appreciable difference, and I have no idea what makes the i210 better than the other one). All of them, IIRC, have the same PCI-e layout as the X10SL7-F, but with an extra 4x PCI-e slot (which is routed to the LSI2308 on the X10SL7-F). In practical terms, you can upgrade a cheaper one to something very close (essentially identical) by adding an M1015 (or similar), but this tends to be more expensive.

There are a few more choices, but these are the ones I'm considering for my future build.

Regarding the ASRock C2750D4I, it turns out it's actually the board used in the new FreeNAS Mini by iXsystems, so it carries a certain expectation that it will work.
 

Pirateguybrush

Dabbler
Joined
May 2, 2014
Messages
14
Okay, so it looks like moving to mATX will save at least $150, with the only downside being that I lose the smaller size and removable bays of the DS380. So this is my current proposal:

Case: Fractal Design Define R4 $145
Motherboard: Supermicro X10SL7-F $310
PSU: Seasonic G Series 360W $105
RAM: 16GB Samsung M391B1G73BH0-CK0 (listed as supported) $240
CPU: G1820 $45
$845AU

If I decide to go for 6 drives, I can drop the motherboard to the X10SLL-F (saving another $90), and the case to the Silverstone TJ08 (saving $30).

Does that look okay?

EDIT:

I've been thinking about it, and the reason I wanted to go for 8 drives was so I'd have the space in years to come (as I don't need nearly that much right now).

However with the way storage gets cheaper per gb over time, building for years into the future probably isn't the best plan.

5TB of storage will store everything I'd want to put on it today - so I could conceivably go for 4x4TB and have 8TB available. I could almost certainly find a way to back up the data before an upgrade, and the addition of just two more drives would take me to 16TB. So I think I'll look at taking that approach. Does this seem sensible?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If you want hot swap bays and don't mind a large case, you can always buy a mid-tower with lots of optical bays and fill it with 4-in-3 or 5-in-3 hotswap drive cages. I make it sound simple, but there aren't many cases that fit the bill, at least for more than one cage.
</random thought>

4*4TB is certainly possible and realistic and your hardware choices look ok. In the future, when larger drives are available, you can slowly replace the individual drives and allow the vdev to resilver. Once they've all been replaced and resilvered, the vdev will automagically increase in size. If you know you will end up buying two more 4TB drives to make a bigger vdev, it's probably worth it to do so now and save the trouble later. Instead of "right now", you can also test and configure your server with 4 disks and later, when you are ready to move it into production, destroy the vdev and make a new one with 6 disks.
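
To illustrate the "only grows at the end" part, here's a rough sketch of an in-place upgrade (the 8TB size is just a placeholder for whatever larger drives exist by then):

[CODE]
# Replace one 4TB disk at a time with hypothetical 8TB drives and resilver.
# Usable space only jumps once the last original disk is gone, because the
# smallest disk in the vdev still dictates the stripe size.
sizes = [4, 4, 4, 4]  # 4-wide RAIDZ2, sizes in TB

for i in range(len(sizes)):
    sizes[i] = 8  # swap disk i for a bigger one, wait for resilver
    usable = (len(sizes) - 2) * min(sizes)
    print(f"after replacing {i + 1} disk(s): {usable} TB usable")

# after replacing 1 disk(s): 8 TB usable
# after replacing 2 disk(s): 8 TB usable
# after replacing 3 disk(s): 8 TB usable
# after replacing 4 disk(s): 16 TB usable
[/CODE]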

Just be sure to stress test all hard drives before putting them into use. A 24-hour memtest86+ run is also recommended.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You could also look at something like 8x2TB RAIDZ2. If you needed to upgrade down the road, rather than having to back everything up and rebuild from scratch, you could replace all the disks (one at a time). After all of them were replaced, with auto-expand, you'd have access to the increased pool size.
 


Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
gpsguy said:
You could also look at something like 8x2TB RAIDZ2. If you needed to upgrade down the road, rather than having to back everything up and rebuild from scratch, you could replace all the disks (one at a time). After all of them were replaced, with auto-expand, you'd have access to the increased pool size.
At that point, I'd rather go with 6*3TB, since it fulfills optimum performance requirements and 3TB drives are currently the best price/performance-wise.
 