Building a 24 drive Freenas Rig


trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
I have the worst type of personality when it comes to anything involving technology: I am obsessed with making sure I am doing something the right way, or at least the best way. For years I have been quite happy with a Synology RAID that I purchased and set up to house my data. I pretty much archive almost any file I've ever needed or used. My Synology was a 5-bay unit with 3TB drives in RAID 5. Synology sells breakout expansion boxes, and I had always anticipated adding additional drives that way, as I am already out of space.

Recently I decided to build a second RAID to act as an off-site backup, since I know a RAID is not a backup. I have plans to set up a point-to-point wireless network between my home and a relative's home roughly 1000 feet away. Then I would place the second server at the relative's home and set up an rsync backup. It was the idea of building this second RAID that got me weighing whether I wanted to spend the money on another Synology for the purpose. As it is, I would have to spend an additional $1000 on breakout boxes for my existing Synology (without drives), and the backup server would be another $2000 without drives, just to ensure I had expandability in both my main RAID and my backup RAID.

So as I started investigating a DIY NAS, I stumbled into the world (which I previously didn't even realize existed) of countless software solutions for a DIY NAS / RAID system (FlexRAID, unRAID, SnapRAID, FreeNAS, NAS4Free, ZFSguru, etc.).

The bulk of my data is media. Reading through each of the options listed above, I found myself weighing the pros and cons of each. Every time I read something about ZFS and FreeNAS, I'd get confused by terminology I had never heard before: vdev, zpool, and so on. This led me toward software that seemed easier (mostly FlexRAID).

As I was saying earlier about my personality: another aspect of it is that I MUST have my mind made up with regard to a pending project. All aspects must be agreed upon in my mind and bookmarked for purchasing. Choosing the software for my RAID caused me borderline mental anguish for four days. I couldn't read enough about the software I was contemplating.

This is where I got into trouble. I've been building computers for over ten years now, and I had never heard of bit rot. When I stumbled across bit rot, I was terrified. Here I was thinking my data was safe and sound in its RAID. I thought the only things that could happen to my data were a hard drive dying, theft, or fire, which was why I thought my backup solution would make my data bulletproof. The second I discovered the mere existence of bit rot, my decision was made: ZFS and FreeNAS. I'm aware that bit rot may indeed be rare, but I have so many files, family photos, etc., terabytes' worth. Ten years from now I do not want to open a file and find it corrupt, and also corrupt on the backup, simply because I had not accessed it in a while.

So here I am, happy that I have decided on ZFS and FreeNAS. The next thing for me to agonize over... hardware.

So my plan is to have a 24-drive system. The Norco 4224 case seems like an easy go-to, but I'm trying to convince myself to decide on quality and go with a Supermicro 24-bay case.

4TB drives all around.

IBM M1015 cards are the obvious choice.

A server motherboard with 32GB of ECC RAM.

The two things I am unsure of (and the reason I'm writing this post) are how many SAS cards / motherboard connections I need for the drives, and what RAIDZ configuration to settle on.

I like consistency, and the thought of popping three IBM M1015 cards into a board to handle the drives seemed like what would calm my OCD the best. I figured I'd find a motherboard with an x16, an x8, and an x4 PCIe slot.

Then I found mixed reviews on the IBM M1015 in an x4 slot. I've seen some say it works and others say it doesn't.

I tried to find a motherboard with at least three x8 PCIe slots. I found a few, but they were all dual-CPU boards.

Then I found this board....

http://www.newegg.com/Product/Product.aspx?Item=N82E16813151247&Tpk=S5512WGM2NR

It has an x16 and an x8 slot, which will support two IBM M1015s, as well as an onboard LSI SAS2008 controller.

I think it would work to use two M1015s and the onboard LSI to accomplish what I want. Input here is highly appreciated.

Now, with regard to the RAIDZ configuration.

Four 6-drive RAIDZ2 vdevs? Is this likely the best bet with regard to redundancy and storage space?

I've seen tips like this from the wiki...

  • Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
  • Start a double-parity RAIDZ (raidz2) configuration at 5 disks (3+2)
  • Start a triple-parity RAIDZ (raidz3) configuration at 8 disks (5+3)
  • (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 8
as well as this...
3-disk RAID-Z = 128KiB / 2 = 64KiB = good
4-disk RAID-Z = 128KiB / 3 = ~43KiB = BAD!
5-disk RAID-Z = 128KiB / 4 = 32KiB = good
9-disk RAID-Z = 128KiB / 8 = 16KiB = good
4-disk RAID-Z2 = 128KiB / 2 = 64KiB = good
5-disk RAID-Z2 = 128KiB / 3 = ~43KiB = BAD!
6-disk RAID-Z2 = 128KiB / 4 = 32KiB = good
10-disk RAID-Z2 = 128KiB / 8 = 16KiB = good
And I've seen it suggested that a single RAIDZ vdev of any level should not exceed 9 drives.
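
To sanity-check other widths against this rule of thumb, here's a rough Python sketch of the same 128KiB-record arithmetic (the example widths at the bottom just reproduce a few rows from the list above):

# Sanity-check RAIDZ vdev widths against the 128KiB-record rule of thumb:
# a width is "optimal" when the data disks (total minus parity) divide the
# default 128KiB recordsize evenly, i.e. the data-disk count is a power of two.

RECORD_KIB = 128
PARITY = {"raidz": 1, "raidz2": 2, "raidz3": 3}

def check_width(level, disks):
    data = disks - PARITY[level]
    stripe = RECORD_KIB / data
    optimal = (data & (data - 1)) == 0  # power of two -> even split
    return (f"{disks}-disk {level}: {RECORD_KIB}KiB / {data} = "
            f"{stripe:.1f}KiB per data disk -> {'good' if optimal else 'BAD'}")

# Example widths, same as a few rows of the quoted list:
for level, disks in [("raidz", 3), ("raidz", 4), ("raidz2", 6), ("raidz2", 10)]:
    print(check_width(level, disks))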
Any tips here would be greatly appreciated.
Thanks for the help guys.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
24-drive systems are always fun to choose a drive config for.

4 vdevs of 6 drives in Z2 would be a good option. Good redundancy, with 2 parity drives out of every 6. The 4 vdevs should give you good speed as well. And 6-drive Z2s are 'optimal' from the 128k stripe standpoint. This gives you the capacity of 16 drives.

You could also do 3 vdevs of 8 drives in Z2. Slightly less redundancy, as it's now 2 parity drives out of every 8. Slightly less potential speed with 3 vdevs. Also 'non-optimal' from the 128k stripe standpoint, but that probably won't make much of a difference. That gives you the capacity of 18 drives.

Or, 2 vdevs of 11 drives in Z3. Slightly better redundancy than the 8-drive Z2 above, and 'optimal' for 128k stripes. Gives the capacity of 16 drives out of 22 total, and leaves 2 slots open for spares or swapping drives around. Random performance will be the lowest, but depending on workload, probably not an issue.
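
Rough numbers for those three layouts, as a quick Python sketch (raw capacity only, ignoring ZFS overhead; 4TB drives assumed, as in the build plan above):

# Rough comparison of the 24-bay layouts above: raw capacity only, ignoring
# ZFS metadata, padding, and the usual "keep some free space" guideline.

DRIVE_TB = 4  # 4TB drives, per the build plan

layouts = [
    # (description, vdevs, disks per vdev, parity disks per vdev)
    ("4 x 6-disk RAIDZ2", 4, 6, 2),
    ("3 x 8-disk RAIDZ2", 3, 8, 2),
    ("2 x 11-disk RAIDZ3 (+2 spare bays)", 2, 11, 3),
]

for name, vdevs, disks, parity in layouts:
    data_disks = vdevs * (disks - parity)
    total = vdevs * disks
    print(f"{name}: {total} drives used, {data_disks}-drive capacity "
          f"(~{data_disks * DRIVE_TB}TB raw), tolerates {parity} failures per vdev")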

If it were me, and it was for home use, I'd probably do the 3 vdevs of 8 drives in Z2. If 16-drive capacity is enough, the 4 vdevs of 6 drives in Z2 would also be good.

As far as bit rot goes, yes, it's definitely real. I've had a 10/11-drive Z3 pool of 3TB drives for almost a year. In that time, I've had drives return 'bad' data twice. The drives didn't tell the OS the data was bad, though; it was ZFS that knew the drive was returning bad data. In the case of hardware RAID, unless the drive knows it's returning bad data, the RAID controller probably won't help you. If the RAID controller knows the drive can't read a particular sector, it should use parity to recompute it. But if the drive doesn't know the data is bad, the RAID controller probably won't know either.
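
A toy Python sketch of the idea, heavily simplified (ZFS keeps block checksums in parent block pointers rather than a lookup like this, but the effect is the same):

import hashlib

# Toy model of end-to-end checksumming.  On read, a checksum mismatch means the
# drive silently returned bad data, and the block can be rebuilt from parity or
# a mirror copy.  (Illustrative sketch only, not the real ZFS on-disk logic.)

def write_block(data):
    """Store a block together with its checksum."""
    return data, hashlib.sha256(data).hexdigest()

def read_block(stored, checksum):
    """Verify the checksum before returning data to the application."""
    if hashlib.sha256(stored).hexdigest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected, "
                      "would reconstruct from parity/mirror")
    return stored

block, csum = write_block(b"family photo bytes ...")
rotted = b"family photo bytez ..."  # one flipped character = bit rot
try:
    read_block(rotted, csum)
except IOError as err:
    print(err)  # plain RAID with no checksums would hand back the bad data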

If you're going to have potentially ~64TB of data on the zpool, I'd seriously look at a socket 2011 Xeon and corresponding board so that you're not limited to 32GB of RAM. 32 may end up working fine, but if it ends up not being enough, it's more expensive to fix with socket 1155/1150.
 

trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
Thanks for taking the time to reply titan.

I was unaware that the 'non-optimal' 128k stripe wouldn't be a huge issue. I would not be opposed to going the route of 3 vdevs of 8 drives in RAIDZ2: I always plan to have spares next to the rig and will immediately go about replacing a failed drive, and I still plan on building a wireless connection to the family member's house and eventually building a second backup server (money for the second rig will take time, though). I like the idea of getting as much capacity versus redundancy as possible, so if non-optimal 128k wouldn't affect me much, 3 vdevs would probably work best.

I am not opposed to going the Xeon route by any means. I had found myself looking at 1155s simply because that is what is so commonly recommended for these builds. I'll have to look and see if there are any single-processor 2011 boards that accept more than 32GB of RAM. If not, my guess is a dual-processor setup is what I'll end up with.

Thanks again!
 

trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
Initial investigating is yielding multiple dual-processor boards. Curiously enough, these boards also have significantly more available PCIe x8 slots. I would then be able to use all IBM M1015 cards.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
There should be lots of Supermicro single-proc 2011 boards that will run more than 32GB. I'm not too familiar with the server stuff; jgreco is kind of who I'd look to for motherboard advice. I'm pretty sure you can get a 2011 single-proc board that'll run at least 128GB of RAM. I'd probably start at 32GB, so that you can simply plug in more RAM if needed.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can also use the M1015 with a SAS expander. I do 24 drives on an M1015 and http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html


trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
I was considering doing that. I had read a few comments about people avoiding SAS expanders with FreeNAS to avoid potential complications. I know very little about SAS expanders myself. Are there any potential downsides to using one with FreeNAS? The one you linked says it still does 6Gbps. How is that possible?

Thanks guys.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I was considering doing that. I had read a few comments about people avoiding SAS expanders with FreeNAS to avoid potential complications. I know very little about SAS expanders myself. Are there any potential downsides to using one with FreeNAS? The one you linked says it still does 6Gbps. How is that possible?

Thanks guys.

What do you mean, how is that possible? It uses 6Gbps links. Since you will be using 4x 6Gbps links, it is theoretically possible to saturate the full 24Gbps if each drive transfers at 1Gbps. But your bottleneck will certainly be something else that is much slower. More than likely, your NIC.
 

trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
I meant with regard to the SAS cable connections to the M1015. I figured that if a single SAS cable could connect four SATA drives to the M1015 at 6Gbps each, then connecting a single SAS cable from the M1015 to the expander card would yield a total speed of 6Gbps for the whole card. 24 drives sharing a single 6Gbps SAS cable back to the M1015 was how I was visualizing it.

Sorry if it comes off as a dumb question.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You'll be using a cable that has 4 links, so you'll have 6Gbps x 4 links, for a total of 24Gbps to/from the SAS expander.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I don't know how that particular expander works, but if you have a single SFF-8087 port that gets split out to 24 individual drives, then cyberjock's math is right. In the SFF-8087 connector you have 4 links, each 6 gig, which gives a total bandwidth of 24 gig. If that's being shared amongst 24 drives, that gives each drive 1 gig. Depending on the drives, that might slightly slow them down, but not by much; you'd still get about 100 MB/sec from each drive. Regular 7200 rpm SATA drives typically max out at about 150 anyway, and WD Green / Red drives are probably only about 100 or so. It would only typically be a problem for SSDs, as they can usually use all 600 MB/sec. I'd simply plug any SSD into a motherboard SATA 6Gbps port instead of going through the expander.

The thing to stay away from is SATA port multipliers. Those can be flaky and not well supported at times. SAS expanders generally work quite well from what I gather.
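
To put rough numbers on that bandwidth sharing, a quick Python sketch (6Gbps SAS/SATA uses 8b/10b encoding, which is why 1Gbps of line rate works out to roughly 100MB/sec; the drive speeds are the approximate figures above):

# Back-of-the-envelope bandwidth for 24 drives behind a single SFF-8087 uplink.
# One SFF-8087 carries 4 lanes of 6Gbps SAS.

LANES = 4
LANE_GBPS = 6
DRIVES = 24

uplink_gbps = LANES * LANE_GBPS              # 24Gbps to/from the expander
per_drive_gbps = uplink_gbps / DRIVES        # ~1Gbps per drive
per_drive_mbs = per_drive_gbps * 1000 / 10   # 8b/10b: 10 line bits per data byte

print(f"Uplink: {uplink_gbps}Gbps total, ~{per_drive_mbs:.0f}MB/sec per drive")
print("vs. ~150MB/sec for a 7200rpm SATA drive, ~100MB/sec for a Green/Red")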
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ugh. I despise composing a longish response and then having the web browser crash.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK, well, to make a long story short: 32GB/1155 is quite possibly fine for home use even if you have 96TB of raw space; the 1GB-per-TB rule of thumb has more to do with people trying to operate busy 100TB departmental fileservers on 16GB of RAM, or other seriously unrealistic stuff. You might consider 48GB/1366.

2011 ought to go up to 128GB or 256GB easily, but at a price premium(!!!) and a speed loss. The low-end E3-1230 Sandys here are inexpensive (CPU ~$200) and do well on benchmarks (10000-13000 on Geekbench 2), whereas with E5 you have to get something like an E5-2630 to get that sort of score, at around $600 (3x the price). Worse, the core speeds on the E5s tend to be lower; that E3-1230 is a 3.2GHz part that turbos to 3.5, and to beat that core speed on E5 you need to go with something like the E5-2643, which sells for $800. And the E5 board is more expensive too.

For true E5 fun, I've got an E5-2697 on order to stick in my poor little laggy X9DR7-TF+ system, which is right now only sporting an E5-2609 that scores 6500 on Geekbench. Sadly, the cheap E3-1230's will still have faster cores than even a $2700 top of the line CPU.

As for SAS expanders, for heaven's sake, if you go that route, get the Supermicro 24 drive chassis with the SAS expander right on the backplane. A single SFF-8087 to attach all your drives to your controller.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
As for SAS expanders, for heaven's sake, if you go that route, get the Supermicro 24 drive chassis with the SAS expander right on the backplane. A single SFF-8087 to attach all your drives to your controller.

Yes, that is a very good recommendation.

Also, my math was wrong. The SAS expander has 24 ports (6 sets of 4). One of those sets will be used to connect to the M1015. So in my configuration I have 4 drives connected directly to the M1015 and 20 drives on the SAS expander.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
All the ATX form factor Supermicro X9S* boards will take up to 256GB. It isn't practical to do so, because the DIMMs are etched out of platinum-plated gold and use silicon specially made from fairy dust and sand from Mars, so the prices for the 32GB modules are ridiculous, and the ones that are validated are all like DDR3-1066 anyway. The Supermicro-validated M393B4G70BM0-CF8 goes for about $800, so about $6500 for 256GB.

But for 16GB modules, the M393B2G70BH0-CK0 (DDR3-1600) is about $125, so 128GB of RAM is only $1000 that way.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"cyberjock likes this."

Well I don't, man. I want to cram three-quarters of a terabyte of memory and another 12 cores in that box, but I just can't justify twenty-five thousand dollars on processors and memory for a single host.
 

trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
So much information, haha. I have to let this all absorb. I understand perfectly that if I want to make the jump to socket 2011, I'm going to have to spend significantly more money for a comparable CPU. I'll have to decide how much RAM I'll need. This will not be a database server; it will just be streaming HD content, to 3 devices simultaneously at most. Is the only danger of running a non-optimal RAIDZ 128k stripe simply a performance hit, and a slight one at that?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK, well, for the sake of discussion: a 4-drive RAIDZ2 box here with an E3-1230 and 8GB of RAM assigned reads a full disc ISO at 185MB/sec. If I then try three other ISOs simultaneously, that drops to 57, 62, and 74MB/sec. Doing 13 simultaneous ISOs saw speeds from 9.7 to 22MB/sec, which aggregated together is still 178MB/sec.

So if we were to assume a high bitrate like 25Mbit/s, that's a bit over 3MBytes/sec; for four devices simultaneously that would be roughly 13MBytes/sec. That's an order of magnitude difference.
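
The same back-of-the-envelope math as a small Python sketch (the bitrate and stream count are illustrative assumptions; 178MB/sec is the aggregate measured above):

# Worst-case streaming demand vs. the aggregate throughput measured above.

STREAM_MBIT = 25           # high-bitrate HD stream, Mbit/sec (assumed)
STREAMS = 4                # simultaneous devices (assumed)
MEASURED_MBS = 178         # MB/sec aggregate from the 13-ISO test above

demand_mbs = STREAM_MBIT / 8 * STREAMS    # ~12.5 MB/sec
headroom = MEASURED_MBS / demand_mbs      # ~14x

print(f"Streaming demand: ~{demand_mbs:.1f}MB/sec for {STREAMS} streams")
print(f"Measured aggregate: {MEASURED_MBS}MB/sec (~{headroom:.0f}x headroom)")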

I'm just going to go out on a limb here and say that you'd have to really try hard in order to screw this up. I'll see if I can do some experiments on my bench box (30TB) later.
 

trsupernothing

Explorer
Joined
Sep 5, 2013
Messages
65
I'm just going to go out on a limb here and say that you'd have to really try hard in order to screw this up. I'll see if I can do some experiments on my bench box (30TB) later.

Haha, you'd be surprised what I can mess up. I appreciate the speed experiments on your boxes. I want to do this as properly as possible.
 