1st storage server - FreeNAS!

Status
Not open for further replies.

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
Hey guys :D

This is a hardware selection & build thread for my new storage server! I have never built a storage server before so this is new territory for me. I've done a couple of customised watercooled PCs but that's about it :) shouldn't be too hard, right?!

Anyway, thanks to all the contributions from members of this forum I feel comfortable with my hardware selection. Of course, comments/suggestions/criticisms are most welcome :cool: I am hoping this thread can be an active thread where no question is stupid and we are all here to contribute. I certainly know that I will have quite a few questions along this journey ;)

Now, this server is primarily for storing all my BluRay movie backups, music, photos (avid photographer) and anything else that might need to be backed up. Currently I have a heap of WD 6TB external HDDs, which gets very confusing & frustrating to say the least :(

Hardware components

Type                Product                                                          Quantity
Mainboard           Supermicro X10SRL-F                                              1
CPU                 Intel Xeon E5-1650 v3                                            1
Memory              Samsung M393A40BB0-CPB 32GB RDIMM DDR4 1.2V 2133                 4
HBA                 IBM ServeRAID M1015 SAS/SATA controller                          1 (5 spares)
HDD                 Western Digital Red 8TB                                          24
Data cables         Supermicro CBL-0281L mini-SAS (SFF-8087) to mini-SAS (SFF-8087)  1 (2 spares)
NIC                 Intel Ethernet converged network adapter X540-T1                 1 (+2 for desktop and HTPC)
Switch              Netgear ProSafe XS708E                                           1
Chassis             Supermicro CSE-846E16-1200B                                      1
Backplane           Supermicro BPN-SAS2-846EL1                                       1
PSU                 Supermicro PWS-920P-SQ                                           2
CPU cooler          Noctua NH-U9DX i4                                                1
Rear chassis fans   Noctua NF-A8                                                     2
HDD fan wall        Noctua NF-F12 iPPC-3000 PWM                                      3
HDD screws          Supermicro MCP-410-00005-0N                                      2 bags
HDD/SSD cage        Supermicro MCP-220-84603-0N                                      1


Any thoughts or suggestions are most welcome :) looking forward to taking on this journey!
 

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
This looks like a pretty overkill CPU for what your use case sounds like. I thought my entry-level Xeon two years ago was overkill.

What's your use case? You can probably drop to a CPU at less than half the cost and still have plenty of breathing room.

I might as well be the first to ask: What kind of drive configuration will you have? What kind of storage do you need? It sounds like it won't be IOPS heavy, so you can probably lean towards maximizing your drive capacity rather than performance.
 
Joined
Apr 9, 2015
Messages
1,258
You may want to look for a board that has the SAS controller built in. If it costs the same as a board plus a SAS HBA, or even just a little less, it's one less thing to complicate the build. If you do get a board with onboard SAS, just remember you may need a reverse breakout cable for the SAS expander.

You could easily drop down to a slower CPU without issue unless you want to stream/transcode a lot of movies at the same time. You are planning to back up your BluRays, but you don't mention using Plex or something like it for streaming. I honestly love it, and I have been very lucky that my smart TV has an app for it that still gets updated even though the TV is over two years old.

Otherwise things look fine; just remember that you will need to purchase coolers for the CPUs. They are not included with the board or case and don't come with the CPU like a lot of desktop CPUs do. Just pick up some used 4U Supermicro coolers on eBay and it should be fine.

I don't know if 10G networking will do a ton of good unless you have a bunch of clients using it at the same time or another computer with a 10G NIC.

As far as drives go, the Reds are more than fine; you may find the HGST NAS drives for the same price or less, but they will generate more heat and use a little more power. I used the HGST 4TB version myself.

You will have 24 bays to fill. I would probably do a 12-drive RaidZ3 to begin with using 6TB drives and then expand later on. That will give you a little under 54TB of storage space to begin with, and you can add a second vDev to the pool later, or do two now and swap out drives later to expand the pool if you need that much space. Two RaidZ3 vDevs of 6TB drives in the pool would net you about 108TB minus overhead.

You could do a RaidZ2 and have more space, but you have to remember that with two vDevs the loss of either vDev nukes the pool. Once drives get over a certain size, you have to plan on at least one going down during a resilver, with a good chance of a second one dropping out and killing your data.
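The capacity arithmetic above can be sketched in a few lines. This is a rough estimate only: it counts data disks in decimal TB and deliberately ignores ZFS metadata overhead and the TB-vs-TiB conversion, which is why the real usable figures come out lower; the function name is just for illustration.

```python
def raidz_data_tb(disks, parity, size_tb):
    """Raw data capacity of one RAID-Zn vdev, before ZFS overhead:
    (total disks - parity disks) * drive size in decimal TB."""
    return (disks - parity) * size_tb

# 12-disk RAIDZ3 vdev of 6TB drives: 9 data disks -> 54 TB raw data
one_vdev = raidz_data_tb(12, 3, 6)
# Two such vdevs striped in one pool -> 108 TB raw data, minus overhead
two_vdevs = 2 * raidz_data_tb(12, 3, 6)
print(one_vdev, two_vdevs)
```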
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Nice build, very similar to mine. Get a Supermicro CPU cooler, either the one I got or the passive one, and make sure to keep the plastic cover in the case.

Sent from my Nexus 5X using Tapatalk
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
Mattbbpl
I did consider going with a slightly slower CPU, but I want to "future proof" the best I can; I'm hoping this storage server will last me quite a while. My primary use for the server is to store files and also to serve them. There will be 5 other computers as well as 3 OpenELEC HTPC boxes all connected.
I was looking at doing a couple of vDevs with 6 drives in each, set up as RaidZ2.

nightshade00013
I was considering a board with an onboard SAS controller but decided on the M1015 due to many people here having good success with it (especially after flashing it).
Going to run a 10G network, as I have the cabling being dropped in my new place and am currently upgrading all HTPCs and desktop PCs to the Intel X540-T1.
I was planning to go with 6 drives per vDev in RaidZ2 config. I've done a lot of research over the past few weeks/months, and apparently multiple RaidZ2 vDevs with either 6 or 10 disks each are recommended, as that offers high capacity with an acceptable rebuild time in case of disk failures.
Also, running 4 x 6-disk RaidZ2 instead of 2 x 10, I'll have the same capacity but twice the IOPS.
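That IOPS claim follows from the usual rule of thumb that each RAID-Z vdev delivers roughly one disk's worth of random IOPS. A back-of-the-envelope comparison (the 150 IOPS per-disk figure below is an assumed typical value for a 7200rpm drive, not a measurement):

```python
def pool_estimate(vdevs, disks_per_vdev, parity, size_tb, disk_iops=150):
    """Very rough pool estimate: raw data capacity scales with data disks,
    random IOPS scale with the number of vdevs (about one disk's worth each)."""
    capacity_tb = vdevs * (disks_per_vdev - parity) * size_tb
    iops = vdevs * disk_iops
    return capacity_tb, iops

# 4 x 6-disk RAIDZ2 vs 2 x 10-disk RAIDZ2, both with 6TB drives:
print(pool_estimate(4, 6, 2, 6))   # (96, 600) -> same capacity...
print(pool_estimate(2, 10, 2, 6))  # (96, 300) -> ...half the IOPS
```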

SweetAndLow
Thanks! I will most definitely look into that cooler. Do you know if it is noisy? I was looking at the Noctua NH-U9DX i4
 
Joined
Apr 9, 2015
Messages
1,258
Just be careful with that many vDevs; any one of them can be a point of failure. And really, the IOPS won't matter much unless you have a large number of users and a workload with many files being accessed at the same time. From the sounds of it, this is going to be used at home with 3 to 4 users max, so you will see minimal utilization.

As far as the SAS controller, separate or onboard, both can be flashed to IT mode the same as any other LSI card based on the correct chipset. Using an X8DT6 board, I flashed the SAS2008 to IT mode and it is literally the same as the M1015. The onboard version on a board comparable to the one you listed is just a newer version that is SAS3 capable instead of SAS2.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
SweetAndLow
Thanks! I will most definitely look into that cooler. Do you know if it is noisy? I was looking at the Noctua NH-U9DX i4
Whoa there, with that case nothing is quiet. If quiet is a requirement, you need a completely different case. Mine is under the house in the crawl space on the lowest fan setting and can still be heard.




 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
nightshade00013
Thanks for the suggestion. Would you recommend any of the following setups?
  • 3 x 8 disks Raid Z2
  • 2 x 10 disks Raid Z2 (does that mean I would have 4 spares?)
  • 2 x 11 disks Raid Z3 (2 spares)
So, if a vDev goes down, does that mean the whole pool is then corrupted and needs to be rebuilt from the ground up? Is there a disadvantage going to Raid Z3 performance wise?

Also, the backplane I am using is SAS2 so there wouldn't really be a benefit going with an onboard sas2008?

SweetAndLow
Sorry, I already understand the case isn't that quiet. What I was actually asking is whether I can avoid adding to the already noisy case! :p

This may be a stupid question: do I have to run the server 24/7? There will be times when I don't need the server on, so is it a bad idea to shut it down and only use it when required? I think I read somewhere that shutting the server down is not recommended?

Also updated original post to reflect 2 x 32GB ram
 
Joined
Apr 9, 2015
Messages
1,258
Personally, if I went up to 6TB drives I would only run RaidZ3, and with multiple vDevs (or a plan for one in the future) I would run RaidZ3 even with 4TB drives. You could also do a RaidZ2 with a hot spare, which is very close in redundancy to a RaidZ3. Unless the plan is for 10TB drives or larger, I would not worry about a hot spare with RaidZ3.

As I said before, I would run two 12-drive RaidZ3 vDevs, starting with one at build time and adding a second later when I need to expand. That is just my take on things with the space available.

To answer your question about losing the entire pool if you lose a vDev: yes. If you have a pool of 5 vDevs and one goes out, the whole thing is gone.

Performance-wise, there is very little difference between RaidZ3 and RaidZ2 as long as the RaidZ3 has one more disk; a 6-drive RaidZ2 matches a 7-drive RaidZ3. Writes with RaidZ3 could be slightly slower due to the extra parity calculation, but honestly, since it is software RAID and you have a powerful CPU, it should be no problem. If you want a BASIC idea of the performance, check out https://calomel.org/zfs_raid_speed_capacity.html
A snippet of data comparison:
Code:

12x 4TB, raidz3 (raid7),       33.6 TB,  w=452MB/s , rw=105MB/s , r=840MB/s
11x 4TB, raidz3 (raid7),       30.2 TB,  w=552MB/s , rw=103MB/s , r=963MB/s
22x 4TB, 2 striped 11x raidz3, 60.4 TB,  w=567MB/s , rw=162MB/s , r=1139MB/s


The linked data has some gaps, but you can see how things work together. I wish they had an 11-drive RaidZ2 to compare to a 12-drive RaidZ3, but you can see the kind of throughput you would get with what I suggest, extrapolated as closely as I could to two vDevs. Plus, with FreeNAS doing its magic, the clients will have no issues even if you have an SSD writing to the pool, and reads will be VERY fast. My 7-drive RaidZ3 pool reads around 500MBps. I transfer some files between datasets, and I made a script so that any file put in a particular folder in one dataset is moved internally to another dataset at 15-minute intervals rather than going across the network at SMB speeds.

There will not be a speed advantage right now with the SAS expander, but having one less part that could potentially fail is always a good thing. We could drive cars with four small engines, one on each wheel, but they would be MUCH more complex to maintain and deal with. Plus, it is possible to save money over two separate boards, and when two builds break even in cost, the simpler build wins IMHO. And the controller built onto the board is SAS3, so you are getting something a little newer at the same cost.

You do not absolutely need to have FreeNAS running 24/7, but boot-up takes about five to ten minutes, and the drives honestly want to be running 24/7. And if you use Plex and are away from home and decide to stream something while the server is off, you have to VPN home, power it up via IPMI, and then wait a while until everything is running. Since you take pictures, they could also be served via Plex, and if you are out and someone wants to view a little of your work, it's as simple as pulling out your phone. I do it all the time when I am talking with someone about driving the 101 to San Francisco or coming back home through Nevada, Utah and Colorado. I keep a couple of pictures on my phone, but the majority are on my server.

Trust me, when you are having a conversation and want to show someone something, it had better be there when you pull out your phone or tablet, not ten minutes from now. A lot of people still cannot wrap their head around how a phone with 64GB of storage can have access to more than 5TB of movies, TV shows, music and pictures.
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96

Wow! What a great post. So much great information available to read and you answered quite a lot of questions I had ;)

I do have another question: in my research, they say that for 4K drives RaidZ should use the following drive counts. This is from https://forums.anandtech.com/threads/zfs-raidz-question.2353232/#post-35760300

RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives

Therefore would it not be more beneficial to run a RaidZ3 in 2 x 11 disk vDev with 2 hot spares?
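For reference, those drive counts come from the old rule of thumb of a power-of-two number of data disks plus parity, which can be generated like this (illustrative only):

```python
def recommended_widths(parity, max_disks=40):
    """Old rule of thumb for 4K-sector drives: vdev width = 2**n data disks
    plus the parity disks (1 for RAID-Z, 2 for Z2, 3 for Z3)."""
    n, widths = 1, []
    while 2 ** n + parity <= max_disks:
        widths.append(2 ** n + parity)
        n += 1
    return widths

print(recommended_widths(1))  # RAID-Z:  [3, 5, 9, 17, 33]
print(recommended_widths(2))  # RAID-Z2: [4, 6, 10, 18, 34]
print(recommended_widths(3))  # RAID-Z3: [5, 7, 11, 19, 35]
```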
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives
Therefore would it not be more beneficial to run a RaidZ3 in 2 x 11 disk vDev with 2 hot spares?

This is old information and is no longer applicable to the creation of vdevs and parity; it used to be the case a while ago but is not anymore. You can run any number of disks in any raidzX configuration, so long as you adhere to the minimums and limits (e.g. you need more than 2 drives for a raidz2, it is not recommended to go wider than 11-12 disks per vdev, etc.)
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
This is old information and is no longer applicable to the creation of vdevs and parity; it used to be the case a while ago but is not anymore. You can run any number of disks in any raidzX configuration, so long as you adhere to the minimums and limits (e.g. you need more than 2 drives for a raidz2, it is not recommended to go wider than 11-12 disks per vdev, etc.)

Thanks. Might give the 2 x 12 RaidZ3 vDev's a go
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
I have read that this motherboard prefers to run with 4 sticks of RAM...

So, is it overkill for my use case to have 4 x 32GB of RAM, or should I go with 4 x 16GB? I do understand that ZFS loves RAM; I just don't want way more than I actually need!
 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
I'm sure 4x16GB will be plenty unless you are planning an 80-100TB+ pool.

Also, just make sure your RAM is compatible with the board (RDIMM vs UDIMM, etc.)
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I have read that this motherboard prefers to run with 4 sticks of RAM...

So, is it overkill for my use case to have 4 x 32GB of RAM, or should I go with 4 x 16GB? I do understand that ZFS loves RAM; I just don't want way more than I actually need!
The motherboard has 8 RAM slots, so 16GB modules will max out at 128GB if you fill it. That seems pretty reasonable to me. Go with 4x16GB to start; that is what I did. I now have 8x16GB because there was a sale on RAM.

 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
I'm sure 4x16GB will be plenty unless you are planning an 80-100TB+ pool.

Also, just make sure your RAM is compatible with the board (RDIMM vs UDIMM, etc.)

I am planning an 80TB+ build. Hopefully, with 2 x 12-disk RaidZ3 vDevs (using 6TB HDDs) I calculate I'll have 108TB? Hope my math is correct :p

I have been on the Supermicro website and confirmed compatibility. The Samsung memory modules are the recommended by Supermicro :cool:
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
I am planning an 80TB+ build. Hopefully, with 2 x 12-disk RaidZ3 vDevs (using 6TB HDDs) I calculate I'll have 108TB? Hope my math is correct :p

Use @Bidule0hm's calculator to work out the usable space in one vdev and double the number for two (I make it 85TB usable)
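A rough sketch of why the 108TB raw-data figure shrinks toward the calculator's number: decimal-TB drive sizes convert to smaller binary TiB, and ZFS takes metadata and swap overhead off the top. The 1.6% metadata and 2GiB-per-disk swap figures below are assumptions approximating what such calculators account for, not exact values; on top of this you would normally also keep the pool below roughly 80-90% full, which brings the practical number down further.

```python
TB = 10 ** 12   # decimal terabyte, as drives are marketed
TiB = 2 ** 40   # binary tebibyte, as the OS reports space

def usable_tib(vdevs, disks, parity, drive_tb,
               meta_overhead=0.016, swap_gib=2):
    """Rough usable space of a striped RAID-Zn pool in TiB:
    data disks only, minus assumed per-disk swap and metadata overhead."""
    data_bytes = vdevs * (disks - parity) * drive_tb * TB
    data_bytes -= vdevs * disks * swap_gib * 2 ** 30  # swap partitions
    return data_bytes * (1 - meta_overhead) / TiB

# 2 x 12-disk RAIDZ3 of 6TB drives: 108TB raw data lands near 96-97 TiB
print(round(usable_tib(2, 12, 3, 6), 1))
```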

I am happy with my 16-bay server, and am struggling to fill it with drives

Your server is so much better
Have Fun
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
Use @Bidule0hm's calculator to work out the usable space in one vdev and double the number for two (I make it 85TB usable)

I am happy with my 16-bay server, and am struggling to fill it with drives

Your server is so much better
Have Fun

Thanks for the link to the calculator. Looks like I just scrape above the 80TB mark :p

I have ordered my ram and mainboard today. Can't wait for them to arrive.

What ssd would you guys recommend?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Thanks for the link to the calculator. Looks like I just scrape above the 80TB mark :p

I have ordered my ram and mainboard today. Can't wait for them to arrive.

What ssd would you guys recommend?
What is the SSD for? The most common use is as a boot disk; a small 30GB one is fine for that.

 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
Maybe not that small; 64 or 128GB is cheap enough these days, and then you'll have plenty of space for the future :)
 