Thoughts on my new NAS build

Status
Not open for further replies.

Nidhogg

Dabbler
Joined
Sep 22, 2012
Messages
12
Hey!
I am in the process of getting fibre installed into my house, which will happen sometime in the next few months, so I have started to sort out my home network the way I want it.
We have two buildings, a house and a garage (a big one, it can fit 8 cars), and I will set up a server rack in one of the two buildings and connect them with another fibre cable. I am migrating to everything Unifi for switches, cameras, etc.; the only thing that will stay is the Asus router, until Unifi fixes their VPN support.
Anyway, I've been looking to buy a used, refurbished Supermicro server, and I found this one that I'm interested in:
http://www.ebay.com/itm/Supermicro-...e-64GB-Rail-/132299864219?hash=item1ecdafb89b

It features two Intel Xeon 8-core CPUs, 64GB DDR3, a redundant 920W PSU, and quad Intel Gigabit ports.
In this I would put 12x WD Red 4TB disks, using RAID-Z2, with one pool being 8 disks and the other being 4.

The first pool would be used for non-critical stuff I could easily afford to lose, where I just want as much storage as possible, and the other would be for the really important stuff: movies and pictures of my children, documents and contracts, etc.
I currently have an old Mini-ITX NAS in a Fractal Design case with 6x 3TB disks, and that will be placed in the other building from the main NAS for semi-offsite backup (around 10m between the buildings). I will also keep one disk at work for the really, really important stuff, for true offsite backup. The other NAS will be used for either daily or weekly backups, while the disk at work will be updated perhaps every other month.

But anyway, any thoughts and ideas about the build?

I'm most interested in hearing what you think about the RAM size for the two pools, and also whether doing one big pool and one small one is the right way to go.
That will leave me with 24TB usable storage for the big one and 8TB for the small one, if I have calculated correctly, which will work well with my other NAS for backup, since that one has 12TB usable storage, so they will match pretty well.
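As a rough sanity check of those numbers (a sketch only: it uses the simple "disks minus two parity drives, times disk size" rule for RAID-Z2, and ignores TB vs TiB conversion, ZFS metadata overhead, and the recommended free-space headroom):

```shell
#!/bin/sh
# Rough RAID-Z2 usable capacity: (number of disks - 2 parity disks) * disk size.
# Ignores TB vs TiB, ZFS metadata, and the ~20% free-space headroom.
disk_tb=4
big_pool=$(( (8 - 2) * disk_tb ))    # 8-disk RAID-Z2 vdev
small_pool=$(( (4 - 2) * disk_tb )) # 4-disk RAID-Z2 vdev
echo "big pool:   ${big_pool} TB"
echo "small pool: ${small_pool} TB"
```

That gives 24 TB and 8 TB nominal, matching the figures above; real usable space will be noticeably lower once TiB reporting and reserved space are accounted for.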
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hey!
I am in the process of getting fibre installed into my house [...] We have two buildings, a house and a garage (big one, can fit 8 cars) [...]
It features two Intel Xeon 8-core CPUs, 64GB DDR3, redundant 920W PSU, and quad Intel Gigabit ports. In this I would put 12x WD Red 4TB disks, using RAID-Z2 [...] But anyway, any thoughts and ideas about the build?

Unless you are planning to run a bunch of other processing on the system besides just storage (if you are, please tell us), you are wasting a bunch of money on CPUs, and on the electricity to run them, that you don't need. Then again, if you have an 8-car garage and all, you probably have all the money you can spend.
 

Nidhogg

Dabbler
Joined
Sep 22, 2012
Messages
12
Well, I don't think I'm wasting too much money on the CPUs, because it's a used server and the whole server costs only $125 more than a new motherboard from Newegg, and now I get dual CPUs, loads of RAM, plus a chassis with redundant PSUs as well.
The price for one CPU makes the total roughly the same, so the extra CPU, RAM, PSU and chassis come "for free".

Could I do fine with a less powerful CPU? Of course, but if I can't find a similar configuration, used, with less power-hungry CPUs, then I guess I'll have to stick with that one.
And this will be used in a home environment, so while electricity is something to be concerned with, and I agree that two 95W TDP CPUs are too much, getting my daughter to remember to turn off the TV when she's done watching will probably save me more than dropping the extra CPU would.
So while not irrelevant in any way, it's not the biggest priority for me.

Regarding the garage, I'm sorry to see it apparently struck a nerve with you, but I was not trying to brag about how rich I am (which I most definitely am not). The garage was built in 1958, and we live out in the countryside, some 7 miles from town; it was built as industrial real estate back in the day, so while I'm lucky to have a big garage where I can work on my hobby car, it's not really fancy in any way.
My only point in bringing up the garage was to show that I have ample space for a server rack in there. That's all.
Most of the time, when I mention keeping a server in the garage, people think of a carport or a one-car garage.

I appreciate your comment, and the electricity IS something I'm looking at, but the size of my garage is irrelevant to my question; I just want some thoughts about my build from the competent and experienced people in this forum.
If you know of a server that's cheaper than the one I found, with fewer CPUs, then I'm all ears. Thanks.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Sounds to me like a dual-CPU configuration would be too much. Have you considered maybe an Atom-based system, as these consume very little power? Also, newer Skylake and Kaby Lake CPUs are more efficient than their Sandy Bridge counterparts.

There are plenty of rack-mount cases that will easily accommodate 12 drives, and some of the Atom boards will support 14 directly attached drives without the need for an HBA.

I suggest you take a look at the hardware recommendations guide to give you a better idea of what you need to have a stable FreeNAS server.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If you know of a server that's cheaper than the one I found, with fewer CPUs, then I'm all ears. Thanks.
If I were buying a server, I would buy this one: http://www.ebay.com/itm/Supermicro-...0-6GB-RAM-2x-900w-3x-8port-SATA-/232464651785
It doesn't have enough memory, but that can be upgraded to 96GB easily enough. The only thing I have a question about is the disk controller cards. He doesn't say what kind they are, so I would ask, and be prepared to replace them.
There are airflow advantages to a 4U system over a 2U system and it gives you extra drive bays for future expansion. If you are not pressed for space, go with the bigger chassis and plan to keep it. If you find you need more horsepower, you can always drop a newer system board and CPU in there later.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In this I would put 12x WD RED 4TB disk, using RaidZ2, with one pool being 8 disks and the other being 4.
I would just do a single storage pool, broken into two vdevs of six drives each using RAID-Z2.
You would still use 12 drives at 4 TB each, but there is no need to break them into separate pools; you can just subdivide the space in software if you feel the need. It would give you about 22 TB of practical usable storage. I estimate about 28 TB actual, but there is reserve space you can't cut into without degrading performance.
How much space did you need?
An additional advantage of the 24-bay chassis is that you can add another 6 drives if you want more space. It is best to keep all the vdevs in a pool the same, and you can add a vdev to a pool later. Unless you need significantly different performance from one pool than you expect from the other, there is no reason to have two different pools. I did that in the previous iteration of my current system, having one pool for VMs and another for data, but in my light-use scenario it just didn't need to be. I moved the couple of VMs that I had into the main pool, pulled out the drives I had for the other pool, and sold them.
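For what it's worth, that layout would look something like this from the command line (a sketch only: the pool name "tank" and the da0-da11 device names are placeholders, and on FreeNAS you would normally build the pool through the GUI rather than the shell):

```shell
# Sketch only: pool name "tank" and devices da0-da11 are placeholders;
# on FreeNAS you would normally create this through the Volume Manager GUI.
# One pool made of two 6-disk RAID-Z2 vdevs:
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# The "two pools" then become datasets, which share the same free space:
zfs create tank/bulk       # non-critical bulk storage
zfs create tank/important  # family photos, documents, contracts
```

The key difference from two pools is that the datasets draw from one shared pool of free space, so you never end up with one pool full while the other sits half empty.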
 

Nidhogg

Dabbler
Joined
Sep 22, 2012
Messages
12
Sounds to me that a dual CPU configuration would be too much. Have you considered maybe an Atom based system [...] I suggest you take a look at the hardware recommendations guide [...]

Well, I'm open to any CPUs; I just thought I would look for used hardware to keep the costs down, and therefore my hands are somewhat tied to what's available. I'm not discarding a new system: if it's better to buy new to get better, upgraded stuff, then sure, I'm up for it :)

Thanks for the guide, I was in there a couple of days ago but missed the Download button. I have read through it now; lots of good stuff.

If I were buying a server, I would buy this one: http://www.ebay.com/itm/Supermicro-...0-6GB-RAM-2x-900w-3x-8port-SATA-/232464651785 [...]

That one also looks good, but it too has dual CPUs? Although they are lower-TDP (80W vs 95W, I think), wouldn't that still be overkill as well?
Space is no problem; I've probably got a 42U rack coming that a friend doesn't use, and it will only house the NAS, router, switch, patch panel, KVM, and one or two VMs, and perhaps a seedbox. So 4U is better for airflow, as you say.
But it looks good, I will think about buying it, thanks :)

I would just do a single storage pool and you can break it into two vdevs of six drives each using RaidZ2. [...]

At the moment I don't really "need" that much space; for starters it will only need to fit what I have in my current NAS (12TB total storage, 81% used), but the run time on those HDDs is 4 years, 3 months, 14 days now, so they are due to start failing any time now.
And since I am above the 80% threshold, I figured I might as well build a new system.

If I am using several vdevs in the same pool, doesn't that mean that if I get a problem with one vdev (say, for example, 3 drives fail in one vdev, however unlikely), the entire pool is lost?
But if I keep the pools separate, that means if I lose 3 disks in one vdev/pool, the other pool is unaffected?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
If I am using several vdevs in the same pool, doesn't that mean that if I get a problem with one vdev (say, for example, 3 drives fail in one vdev, however unlikely), the entire pool is lost?
But if I keep the pools separate, that means if I lose 3 disks in one vdev/pool, the other pool is unaffected?
Correct (assuming you are using RAID-Z2). But that configuration has other issues, like not being able to share data between the two pools, etc. It's a storage appliance, and there is no guarantee as to how many drives will fail, or in what order, at any given time.

You should go with what is the best option for you. For me, I just use mirrored vdevs in a single pool.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If I am using several vdevs in the same pool, doesn't that mean that if I get a problem with one vdev (say, for example, 3 drives fail in one vdev, however unlikely), the entire pool is lost?
But if I keep the pools separate, that means if I lose 3 disks in one vdev/pool, the other pool is unaffected?
Nobody ever knows when or if a drive will fail until it fails.
I work in an IT position in my organization where I am responsible for the oversight of several servers that, between them, have around 600 TB of storage. I have had to replace only eight drives (so far) this year, but I have one more I am watching closely. Drives do fail, but if you replace them in a timely manner, a drive failure should not destroy your data.
I usually replace a drive as soon as it gives a strong indication of impending failure, and I count bad sectors as a strong indication.
Of the failed drives, six of them were still in warranty.
Three were 4TB Seagate Constellation drives.
Two were 4TB WD Red Pro drives.
Two were 4TB WD Red drives that were not in warranty.
One was a 6TB WD Red Pro.
The WD Red drives fail more often, in my experience, than the Seagate drives, which is why I use Seagate drives in my home NAS.
The configuration of my NAS is in my signature and it has 12 drives in 2 vdevs of 6 drives each using RAID-Z2. I use that configuration and I recommended it to you. I don't make that suggestion without due consideration.

Edit: I run scripts on my servers at work and on my NAS at home that generate a daily report on the health of each drive, and I look at those reports daily. I have cold spare drives on hand, and when a drive makes me concerned, it gets replaced, usually within the same 12-hour period that a fault is detected.
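(Not the actual scripts, but a minimal sketch of that kind of daily check, assuming smartmontools is installed and using placeholder da0-da2 device names; the full report scripts shared on this forum do much more.)

```shell
#!/bin/sh
# Minimal daily drive-health sketch. Assumes smartmontools is installed;
# the device names are placeholders, so adjust for your controller.
# Flags any drive reporting reallocated sectors (SMART attribute 5).
for disk in /dev/da0 /dev/da1 /dev/da2; do
    realloc=$(smartctl -A "$disk" | awk '$1 == 5 { print $10 }')
    if [ "${realloc:-0}" -gt 0 ]; then
        echo "WARNING: $disk has $realloc reallocated sectors"
    fi
done
```

A real report would also watch pending/uncorrectable sectors and temperatures, and mail the output somewhere you will actually read it.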
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That one also looks good, but it too has dual CPUs? Although they are lower TDP (80w vs 95W I think), but wouldn't that still be overkill as well?
The dual CPU in the suggested chassis is coincidental; that is what happens to already be in it, but it is less of an overkill configuration, and it still offers the 4U chassis that will give you more options for cooling the CPU if you change the system board out later.
I don't like the 1U and 2U chassis because they are too thin to be very flexible in their configuration. I use 3U and 4U servers any time I have the option, because they are usually more flexible in how they can be configured or modified. A rack-dense, processing-heavy situation, like the blade centers we use for running VMs at work, is different, but that is a business need. Home users usually get no benefit from a thin server, but have all the pain, magnified by the fact that they didn't need to do it. A thinner server usually means a louder server, because they have to run the fans fast to get enough airflow to keep it cool.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
A thinner server usually means a louder server, because they have to run the fans fast to get enough airflow to keep it cool.
True that. I have a 1U Supermicro server with 4 bays which I got really cheap. I don't use it as a NAS currently, because I already have one. I plan to use it as a pfSense router since the CPU supports AES-NI. Anyway, the fans do screech a lot on boot, but since I am going to use it as a router, it will be on 24/7, so I don't care as much. Plus it's going to be in a server cabinet anyway.

When my current NAS is not sufficient for me, I plan on using the board & CPU from the 1U and build either a 2U or 3U server. And the current NAS board will become my pfSense router - after I upgrade my Pentium G3240 to any other processor which supports AES-NI. I will ditch the current 1U chassis at that time.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Pool failure should only be an inconvenience. You should have a backup too. Just in case.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Pool failure should only be an inconvenience. You should have a backup too. Just in case.
True. I run a full backup of everything on the primary NAS every Sunday night. In addition to that weekly backup, I have an hourly rsync to my backup NAS that gives me online access to more recent data if the primary NAS is down for some reason.
There should always be a backup, but the only time I have needed to go to my weekly is when I deleted something by mistake.
I have never lost any data from a pool failure, although I have had two drives in one RAID-Z2 vdev fail within a couple of hours of each other. Still, I didn't lose any data, and you can replace two drives at the same time (totally not recommended).
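A minimal sketch of that kind of hourly pull, assuming SSH access between the two boxes; the "primary-nas" hostname and the dataset paths are placeholders, and on FreeNAS you would normally set this up as a scheduled Rsync Task in the GUI instead:

```shell
#!/bin/sh
# Sketch only: "primary-nas" and the paths are placeholders; FreeNAS
# exposes this as a scheduled "Rsync Task" rather than a raw cron job.
# -a preserves permissions and timestamps; --delete mirrors deletions
# on the backup, so pair it with snapshots if you want an undo button.
rsync -a --delete \
    primary-nas:/mnt/tank/important/ \
    /mnt/backup/important/

# Run hourly from cron on the backup NAS, e.g.:
# 0 * * * * /root/pull-backup.sh
```

ZFS replication (snapshot send/receive) is the other common option and also carries the snapshot history across, but rsync is the simpler one to reason about.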
 