BUILD hardware check: 36TB NAS build

Status
Not open for further replies.

koifish59

Dabbler
Joined
Apr 19, 2015
Messages
39
I'll be purchasing all the hardware for my first FreeNAS build soon. This will be for a small business, and I'll be running just a few VMs in jails. Below is the hardware I've selected; I'll be using RAID-Z2. Any critique would be wonderful. All my purchases will be through Newegg or Amazon:

CPU: Intel Xeon E3-1271 v3 - 3.6 GHz quad core w/ Hyper-Threading
Motherboard: ASRock E3C224D4I-14S - socket 1150
Memory: 4x 8 GB ECC DDR3-1600 Samsung M391B1G73QH0-YK0
Disks: 6x 6 TB Western Digital Red Pro WD6001FFWX
SSD: (old) Intel 80 GB
Case: Lian Li PC-Q26A
PSU: Seasonic 660 W Platinum - SS-660XP2


Questions:
1) If I go with RAID-Z2, I'd get 24 TB of usable space. With 6 disks (4 usable), will I be able to add another 4 identical disks later to double my capacity to 48 TB? I'm not sure whether adding more equal-sized disks to the same pool is possible. Are there alternatives, or would I have to buy all the disks from the start?

2) I'm assuming 32 GB of ECC memory is sufficient to cache 24 TB of disk, but not enough for 48 TB?

3) How much of the CPU does the system actually use? I'm assuming the CPU is way overkill. I'll be running just a few light VMs as well. What has the most impact on CPU usage in a FreeNAS box?

4) What kind of throughput can I expect for transfers across the network? I have all gigabit switches right now. My current Synology NAS only gets ~110 MB/s, which I assume is because the disks' read/write rate is the bottleneck.
 
Last edited:

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
1. Your usable space will be less than 4 disks' worth, because you shouldn't fill the pool past 80% and there is ZFS overhead. Bidule0hm has a handy calculator. You can't just add data disks to an existing vdev. You can add a whole new vdev (a 6-disk RAID-Z2, or another size/config) to the pool to expand in the future (rough numbers are sketched below).

2. Should be good for both.

3. Not much. You will have a lot of headroom. Plugins are probably the biggest users of CPU. CIFS can take a decent bit of CPU while transferring. And replication (depending on the encryption cipher) will use a bit.

4. Max throughput would be 4 (number of data disks) times the max throughput of 1 drive.
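To put rough numbers on points 1 and 4, here's a minimal Python sketch (my own illustration, not bidule0hm's calculator; the ~5% ZFS overhead, the 80% fill rule, and the ~150 MB/s per-drive rate are assumed ballpark figures):

```python
# Rough usable-capacity and throughput estimate for 6x 6 TB in RAID-Z2.
# Assumptions: ~5% ZFS/metadata overhead, keep the pool under ~80% full,
# ~150 MB/s sequential per drive (ballpark for a 6 TB 7200 RPM disk).

drives = 6
drive_size_tb = 6
parity_drives = 2                       # RAID-Z2
data_drives = drives - parity_drives    # 4 data disks

raw_data_tb = data_drives * drive_size_tb      # 24 TB of data disks
after_overhead_tb = raw_data_tb * 0.95         # minus rough ZFS overhead
practical_tb = after_overhead_tb * 0.80        # minus the 80% fill guideline

per_drive_mb_s = 150
pool_seq_mb_s = data_drives * per_drive_mb_s   # theoretical pool sequential rate
gigabit_mb_s = 1000 / 8                        # ~125 MB/s wire-speed ceiling

print(f"Raw data capacity:    {raw_data_tb} TB")
print(f"After ZFS overhead:   {after_overhead_tb:.1f} TB")
print(f"Practical (80% full): {practical_tb:.1f} TB")
print(f"Pool sequential rate: ~{pool_seq_mb_s} MB/s")
print(f"Gigabit LAN ceiling:  ~{gigabit_mb_s:.0f} MB/s")
```

On a single gigabit link the wire, not the disks, ends up being the ceiling, which is the point bidule0hm makes further down.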

I'd also suggest comparing the WD Red Pro against something else; it might be overkill for this setup.
 

koifish59

Dabbler
Joined
Apr 19, 2015
Messages
39
Thank you depasseg, that was very helpful. Your answers lead to some follow-up questions:

1) So it's best to just get all the disks from the start. That means buying 10 disks in RAID-Z2 for 8 data disks, which sounds more elegant than managing two separate vdevs. However, adding more disks also increases the chance of a single disk failure, so is the risk now equivalent to 5 disks in RAID-Z1?

Why would the new WD Red Pro drives be overkill? I had a similar question when I started THIS THREAD, since Red Pros cost a good deal more than the original WD Reds.

Edit: never mind, you answered the Red Pro question in that thread: basically higher power draw and heat.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
A single disk failure really isn't the big risk; a multiple-disk failure is, and that becomes more likely with larger drives that take longer to resilver. To reduce this risk you could use RAID-Z3 (7 data drives), stick with two 5-disk RAID-Z2 vdevs, have another system you can replicate to, or keep a cold spare ready to insert.

More vdevs = more IOPS (a single 10-disk RAID-Z2 vdev has roughly the IOPS of one disk).
More data drives = more bandwidth (a 10-disk RAID-Z2 has the bandwidth of 8 disks).
Less redundancy = greater risk of pool loss.

It's a risk analysis only you can do.
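To make those three tradeoffs concrete, here's a quick sketch comparing the layouts mentioned so far. The ~150 MB/s and ~100 IOPS per-drive figures are assumed ballpark numbers for 7200 RPM disks, and each RAID-Z vdev is treated as delivering roughly one disk's worth of IOPS:

```python
# Rough comparison of 10x 6 TB pool layouts on capacity, bandwidth, and IOPS.
# Assumptions: ~150 MB/s sequential and ~100 random IOPS per drive;
# each RAID-Z vdev contributes roughly the IOPS of a single disk.

DRIVE_TB = 6
DRIVE_MB_S = 150
DRIVE_IOPS = 100

layouts = {
    "1x 10-disk RAID-Z2": {"vdevs": 1, "data_per_vdev": 8},
    "2x 5-disk RAID-Z2":  {"vdevs": 2, "data_per_vdev": 3},
    "1x 10-disk RAID-Z3": {"vdevs": 1, "data_per_vdev": 7},
}

for name, cfg in layouts.items():
    data_disks = cfg["vdevs"] * cfg["data_per_vdev"]
    print(f"{name}: ~{data_disks * DRIVE_TB} TB data disks, "
          f"~{data_disks * DRIVE_MB_S} MB/s sequential, "
          f"~{cfg['vdevs'] * DRIVE_IOPS} IOPS")
```

The 2x 5-disk layout gives up raw capacity for roughly double the IOPS, which tends to matter more for VM/ESXi workloads than for bulk CIFS transfers.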
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
One addition to point 4:
4. Max throughput would be 4 (number of data disks) times the max throughput of 1 drive, but you'll be limited by the network speed, so about 1 Gb/s or 125 MB/s.


And I agree on the Pro/Red thing ;)

However, adding more disks also increases the chance of a single disk failure, so is the risk now equivalent to 5 disks in RAID-Z1?

Yes and no. Yes, you'll increase the chance of a failure, but no, it's not equivalent; it's still far better than a RAID-Z1 made from 5x 6 TB drives. If you want the exact numbers, the MTTDL for the 10x 6 TB RAID-Z2 is 5.9 x 10^9 hours and the MTTDL for the 5x 6 TB RAID-Z1 is 1.0 x 10^8 hours, so the RAID-Z2 is more than 50 times safer. And that doesn't even include the risk of a URE during a resilver, so RAID-Z1 is definitely far worse here.

Edit: and the 10x 6 TB RAID-Z3 MTTDL is 3.1 x 10^12 hours, so more than 500 times safer than the RAID-Z2.
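For anyone who wants to reproduce the shape of those numbers, here's a minimal sketch of the standard MTTDL approximation for single, double, and triple parity. The MTTF and resilver-time values below are placeholder assumptions (and the calculator may use different inputs and extra factors), so the absolute figures won't match the ones above, but the Z1 << Z2 << Z3 ordering will:

```python
# Approximate mean time to data loss (MTTDL) for an N-disk RAID-Z vdev,
# ignoring unrecoverable read errors during resilver.
# MTTF/MTTR below are assumed placeholder values, not measured figures.

def mttdl_hours(n_disks: int, parity: int, mttf_h: float, mttr_h: float) -> float:
    """Classic MTTDL approximation: data is lost only if parity+1 drives
    fail within each other's repair (resilver) windows."""
    numerator = mttf_h ** (parity + 1)
    denominator = mttr_h ** parity
    for i in range(parity + 1):
        denominator *= n_disks - i
    return numerator / denominator

MTTF = 1.0e6   # assumed drive MTTF in hours
MTTR = 120.0   # assumed time to replace and resilver a 6 TB drive, in hours

print(f"5-disk RAID-Z1:  {mttdl_hours(5, 1, MTTF, MTTR):.1e} hours")
print(f"10-disk RAID-Z2: {mttdl_hours(10, 2, MTTF, MTTR):.1e} hours")
print(f"10-disk RAID-Z3: {mttdl_hours(10, 3, MTTF, MTTR):.1e} hours")
```

The exact figures swing heavily with the assumed MTTF and resilver time; the relative ordering of the layouts is the robust part.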
 
Last edited:

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310

koifish59

Dabbler
Joined
Apr 19, 2015
Messages
39
Thanks all for the comments. I'm really considering going with an 11-disk RAID-Z3 now. How much of a performance hit would it be going from Z2 to Z3?

Lastly, the ASRock motherboard I want to get officially states that it supports 32 GB of memory (specs sheet here), but I now want to go with 64 GB (16 GB per stick). Is 32 GB a recommended limit, or a hard limit?


PS - I will be running a handful of lightweight VMs (such as a proxy, VPN, etc.), and in addition some ESXi datastore volumes will be on this NAS. I think I'll be OK without an L2ARC since I'm upgrading to 64 GB of memory.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
Pretty sure the mobo RAM limit is a hard limit.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Thanks all for the comments. I'm really considering going with an 11-disk RAID-Z3 now. How much of a performance hit would it be going from Z2 to Z3?

Lastly, the ASRock motherboard I want to get officially states that it supports 32 GB of memory (specs sheet here), but I now want to go with 64 GB (16 GB per stick). Is 32 GB a recommended limit, or a hard limit?


PS - I will be running a handful of lightweight VMs (such as a proxy, VPN, etc.), and in addition some ESXi datastore volumes will be on this NAS. I think I'll be OK without an L2ARC since I'm upgrading to 64 GB of memory.
32 GB is a hard limit.
 

koifish59

Dabbler
Joined
Apr 19, 2015
Messages
39
That's a real bummer. I really wanted to move up from 32 GB to 64 GB so I wouldn't have to use any L2ARC.

With 10x 6 TB drives in RAID-Z2, some VMs in jails, ESXi datastores, drive encryption, and CIFS transfers, would I be OK with the 32 GB memory limitation, or should I plan for a new platform?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Since you haven't bought anything yet, price out an LGA2011 system. Considering this will be in service for years, it's a small price to pay for a heck of a lot of headroom for expansion.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
I would get an X10 socket 2011 board and put 64 GB of RAM on it to start with. How many VMs are you running?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Even X9-series stuff works well. I picked up a 36-bay 4U Supermicro chassis off eBay for $600 with an X9DRi-LN4F+ mobo inside. Two E5-2670 processors and some RAM later, I have a very capable 16C/32T @ 2.6/3.3 GHz system with 128 GB of RAM. Not counting drives, I've got under $1,500 in the entire thing.
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
Even X9-series stuff works well. I picked up a 36-bay 4U Supermicro chassis off eBay for $600 with an X9DRi-LN4F+ mobo inside. Two E5-2670 processors and some RAM later, I have a very capable 16C/32T @ 2.6/3.3 GHz system with 128 GB of RAM. Not counting drives, I've got under $1,500 in the entire thing.

Wow, there are some sick deals on that mobo and CPU. I have been looking for another system and this might be it.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Wow, there are some sick deals on that mobo and CPU. I have been looking for another system and this might be it.

Yep... and you should be able to build a system powerful enough that, save doing something absolutely ludicrous (a zillion jails, or numerous 10GbE network drops), you'll never run out of juice. The built-in quad Intel NICs are nice too... I'm going to do one LACP pair to serve the clients (CIFS, primarily) and another LACP pair to serve the VM farm (NFS).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I really wanted to move up from 32 GB to 64 GB so I wouldn't have to use any L2ARC
Really, you shouldn't attempt an L2ARC unless you have at least 64 GB of RAM.
 

koifish59

Dabbler
Joined
Apr 19, 2015
Messages
39
I guess the best middle ground between LGA1150 (32 GB max) and an LGA2011 X10 platform (higher cost, and more performance than I'll need) would be to wait for socket 1151 server motherboards and CPUs to become available. That would give me the 64 GB memory ceiling I'll need, on a newer but cheaper platform than X10.

The first desktop 1151 processors and motherboards launched just a few months ago. Any idea how long it usually takes for the server variants to come out? If it takes too long, I may just settle for my initial socket 1150 build.
 