Build for high-end FreeNAS


colicab

Cadet
Joined
Sep 14, 2017
Messages
2
Hi everyone,

I'm making a NAS build for our university research department. The key requirements are:
  • Long term. We have a ReadyNAS NV+ which we have been using for about 10 years. Ideally we would want to replace this with a setup that can last for the same amount of time (knowing that this is quite a difficult requirement to meet :smile: )
  • Highly reliable/accessible. The NAS storage is used by 30+ people on a daily basis. Researchers usually store their data/documents directly on the NAS for backup/collaboration purposes. To make sure the storage is highly reliable, I think FreeNAS with ZFS is a good choice. Of course, no single system is fail-safe (we will make off-site backups of the data).
  • Expandable. Storage expandability is key for us as our storage needs are growing very rapidly (genetics research).

I have the following in mind:
  • CASE Sharkoon T9 Value
  • MoBo Asrock X99 Taichi (2011-3 socket, ECC, 8 DDR4 slots, 10 SATA-600 connections)
  • CPU Intel Xeon E5-2620V4 (8C/16T)
  • RAM CT16G4RFD824A x2 (16GB DDR4-2400 PC4-19200 • CL=17 • Dual Ranked • x8 based • Registered • ECC • 1.2V; 32GB in total)
  • SSD (Boot) Samsung 850 EVO (250GB)
  • HDD (Storage) WD Red NAS WD80EFZX 8TB x3
  • POWER Corsair RMx Series RM650x
  • LAN PCI ASUS PEB-10G/57811-1S (10 Gigabit SFP+)
  • HOT SWAP BAYS SilverStone FS304 x 2 (4-in-3 5.25" tray-less hot-swap bay cages)

To provide some context about the hardware choices:
  • Form factor is not an issue so I didn't go for a typical NAS case, opening the possibility for ATX motherboards.
  • Adding a 10GBit adapter made sense to ensure high bandwidth accessibility for a multi-user environment. We have access to 10Gbit fibre switches.
  • For expansion purposes I chose an ATX motherboard with 10 SATA connections. The build is also expandable memory-wise to keep up with the needs of ZFS.
  • I had a RAIDZ1 array in mind, yielding an initial storage capacity of ~14.5TB with 3 WD RED 8TB disks (see the capacity sketch after this list). That's enough for our storage needs for the foreseeable future but gives us the possibility to expand to 50+TB.
  • Based on what I read about ZFS memory needs, 32GB of ECC RAM should suffice for the 14TB array and initial storage expansions. The additional RAM slots still give the possibility to increase memory capacity if need be. BTW, I'm not intending to use deduplication: it seems more affordable to add disks than to expand the ECC memory for deduplication functionality.
  • I opted for hot swap bays for the ease of replacing/adding drives.
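
For anyone checking my numbers: here's the back-of-the-envelope arithmetic as a rough Python sketch. The figures are approximations; ZFS metadata and slop overhead are ignored, so real usable space will be a bit lower.

```python
# Rough RAIDZ capacity arithmetic (ZFS metadata/slop overhead ignored,
# so actual usable space will be somewhat lower than these figures).
TB = 1000**4   # drives are sold in decimal terabytes
TIB = 1024**4  # usable space is usually reported in binary tebibytes

def raidz_usable_tib(disks: int, disk_tb: int, parity: int) -> float:
    """Approximate usable capacity of a single RAIDZ vdev, in TiB."""
    return (disks - parity) * disk_tb * TB / TIB

print(f"{raidz_usable_tib(3, 8, parity=1):.2f} TiB")  # 3x8TB RAIDZ1 -> ~14.5 TiB
print(f"{raidz_usable_tib(6, 8, parity=2):.2f} TiB")  # 6x8TB RAIDZ2 -> ~29.1 TiB

# Rule-of-thumb RAM sizing (~1GB RAM per TB of raw storage, 16GB minimum;
# a heuristic only, and assuming no deduplication):
raw_tb = 3 * 8
print(max(16, raw_tb), "GB RAM suggested")  # -> 24, so 32GB leaves headroom
```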

As it's my first (Free)NAS build, I would greatly appreciate any feedback/remarks. Feel free to suggest any modifications. In particular, I wonder about the following:
  • Is the 8-core CPU overkill here? I could go a little cheaper with a quad core but I feel I don't want to downsize too much as this build should run smoothly over a long period of time.
  • Perhaps the same question for the RAM. The ATX MoBo can handle 128GB RAM (8 x 16GB). To maximise expansion possibilities I thought I'd start straight away with 16GB DDR4 modules.
  • Is there a difference in performance using only 2 of the DDR4 slots instead of spreading the RAM modules more evenly over the available slots?
  • Is 650W power enough to keep up with possible expansions?
  • Cost is not the major concern. However, I would be interested to know if anything in my build seems obsolete or overkill, so I can reduce the overall cost.

Thank you in advance for any feedback!

Kind regards
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Your choice of motherboard is not going to be good for a server; it has features such as wireless, sound, and Bluetooth, and these components are worthless for a server! Your desire for future expansion and reliability would best be served by a true server-grade board designed for that purpose. Check our Resources section for Hardware Recommendations to find better suggestions for building a quality machine that your department can be proud of.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
High reliability and dubious ECC is not a good mix. Also, trust me when I say that you do not want an ASRock X99 board for a Xeon E5. The experience is less than stellar.

Is the 8-core CPU overkill here?
Yes. Very much so in price.

Perhaps the same question for the RAM.
Well, 64GB is probably enough for a department file server.

Is there a difference in performance using only 2 of the DDR4 slots instead of spreading the RAM modules more evenly over the available slots?
Theoretically yes, but it's inconsequential here.
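
For a sense of scale, here's a quick sketch with theoretical peak figures (assumed numbers; real-world throughput is lower for both):

```python
# Theoretical peak memory bandwidth vs. 10GbE demand (rough figures only).
CHANNEL_GBPS = 19.2  # DDR4-2400: 2400 MT/s x 8 bytes per channel

for channels in (2, 4):
    print(f"{channels} channels: ~{channels * CHANNEL_GBPS:.0f} GB/s peak")

print(f"10GbE payload: ~{10 / 8:.2f} GB/s")  # ~1.25 GB/s, far below either
```

Either way, memory bandwidth dwarfs what a 10GbE file server can push.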

Is 650W power enough to keep up with possible expansions?
Up to some 14-15 HDDs, probably.
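
As a rough sanity check (every wattage below is an assumed typical figure, not a measurement; simultaneous drive spin-up is the worst case):

```python
# Rough PSU headroom estimate; all wattages are assumed typical figures.
PSU_W      = 650
HEADROOM   = 0.8   # keep worst-case load at or below ~80% of the rating
CPU_W      = 85    # Xeon E5-2620 v4 TDP
BASE_W     = 60    # board, RAM, fans, SSD, 10G NIC (rough allowance)
HDD_SPIN_W = 25    # ~1.8-2A on the 12V rail per drive during spin-up

budget = PSU_W * HEADROOM - CPU_W - BASE_W
print(int(budget // HDD_SPIN_W), "drives, roughly")  # -> 15 on these numbers
```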

LAN PCI ASUS PEB-10G/57811-1S (10 Gigabit SFP+)
Broadcom. Definitely not a very good choice. You'd want Intel or Chelsio.

I had a RAIDZ1 array in mind, yielding an initial storage capacity of ~14.5TB with 3 WD RED 8TB disks.
RAIDZ2 would be much better for reliability.

  • MoBo Asrock X99 Taichi (2011-3 socket, ECC, 8 DDR4 slots, 10 SATA-600 connections)
  • CPU Intel Xeon E5-2620V4 (8C/16T)
  • RAM CT16G4RFD824A x2 (16GB DDR4-2400 PC4-19200 • CL=17 • Dual Ranked • x8 based • Registered • ECC • 1.2V; 32GB in total)
Back to the platform: You're better off with a Supermicro X11SSM-F and one of the Xeon E3s with Hyper-Threading. You'd be limited to 64GB, but that's going to be plenty for the ~50TB.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
  • Long term.
  • Highly reliable/accessible.
These requirements increase costs: you will have to spend more to meet them. A better motherboard, power supply, HDDs, etc...
  • MoBo Asrock X99 Taichi (2011-3 socket, ECC, 8 DDR4 slots, 10 SATA-600 connections)
Not a serious server motherboard, IMHO.
  • HDD (Storage) WD Red NAS WD80EFZX 8TB x3
These drives may be good for home use, but for SERIOUS storage you should go with enterprise drives. Brand is not so much a concern, but they will last a LOT longer and have fewer problems. You might be surprised that enterprise drives are sometimes not much more expensive than the consumer ones. Watch for prices.
  • POWER Corsair RMx Series RM650x
Get a high-quality GOLD power supply. You want a 5-7 year warranty minimum; maybe this one qualifies, I did not look. A 10-year warranty might be interesting, but watch the cost... Platinum units are usually a waste. A dual-PSU NAS system increases availability...
  • LAN PCI ASUS PEB-10G/57811-1S (10 Gigabit SFP+)
If this is a serious server, get two: one primary and one backup, or you can LAGG them, etc. Or buy one 10G and one 1G (Intel).

  • I had a RAIDZ1 array in mind, yielding an initial storage capacity of ~14.5TB with 3 WD RED 8TB disks. That's enough for our storage needs for the foreseeable future but gives us the possibility to expand to 50+TB.
RAIDZ2 is the minimum. With high-quality drives, Z2 and backups will keep you protected fairly well. If you want it to keep running no matter what, then go RAIDZ3.

WD Reds have a 3-year warranty on them. Enterprise drives will have a 5-year warranty and better build quality. See the Disk Price/Performance Spreadsheet for some insight.

  • Is there a difference in performance using only 2 of the DDR4 slots instead of spreading the RAM modules more evenly over the available slots?
Yes, but for a file server it's not that big of a deal, IMHO.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
  • CASE Sharkoon T9 Value
  • MoBo Asrock X99 Taichi (2011-3 socket, ECC, 8 DDR4 slots, 10 SATA-600 connections)
  • CPU Intel Xeon E5-2620V4 (8C/16T)
  • RAM CT16G4RFD824A x2 (16GB DDR4-2400 PC4-19200 • CL=17 • Dual Ranked • x8 based • Registered • ECC • 1.2V; 32GB in total)
  • SSD (Boot) Samsung 850 EVO (250GB)
  • HDD (Storage) WD Red NAS WD80EFZX 8TB x3
  • POWER Corsair RMx Series RM650x
  • LAN PCI ASUS PEB-10G/57811-1S (10 Gigabit SFP+)
  • HOT SWAP BAYS SilverStone FS304 x 2 (4-in-3 5.25" tray less hot swap bay cages)
You have already received some great advice above. My two cents is as follows.
First, you are going to want an actual server chassis that includes hot-swap drive bays; most server chassis will come with hot-swap power supplies too. If you want a real 'high availability' server, you want to start with dual power supplies. It doesn't happen as often any more, but I have seen a power supply take a system down a few times; having two power supplies gives you some protection from that.
Next, the 8TB drives are great value (price per TB), but you need at least RAIDz2, so you are going to need a minimum of 4 drives. If price is not a problem, go with six drives to start.
Next, you have already been advised about the system board and processor. The one you picked is more of a gaming board, and you should definitely stay far away from it, as gaming boards are not really built for the long haul. Pretty much any system board that includes audio and wireless is not what you want for a server. The Supermicro boards are almost all worth looking at, but keep in mind that even Supermicro makes some boards that are not really intended for use in a server.
In your place, I would look at the helium-filled HGST drives. They are reputed to be fairly good and the value is reasonable. You should also look seriously at using a SAS HBA to run the drives instead of worrying about how many SATA ports are on the system board. If you get a proper server chassis, the drive backplane may well have a SAS expander built in, so you can run 24 (or more) drives from one SAS HBA.
Also, you need a pair of drives dedicated to booting the system that are not part of the storage. The FreeNAS operating system will work fine from an 8GB USB stick, but I suggest (for long-term reliability) a pair of small-capacity (32GB) solid-state drives. They don't need to be fast, just reliable. They can plug into the SATA connectors on the system board that you won't need once you have all your data drives connected to the SAS HBA.
From what you said in your initial post, it looks like you haven't done much research yet. Building a server is a big deal, and you need to put more time into the research before you make mistakes that will require you (or someone else) to do it all over again in a year or two.
In most environments where cost is really not a concern, the organization will buy something ready-made instead of putting parts together, because then you have a vendor warranty to turn to. A company like Dell/EMC can sell you a system that you can count on to work without fail; they will warranty it for up to 5 years and sell you a support contract for even longer if you have the money to pay for it.

You might want to look at something like this: http://www.dell.com/en-us/work/shop/povw/poweredge-t330
 

colicab

Cadet
Joined
Sep 14, 2017
Messages
2
First of all, thank you all for your helpful input!

In the meantime, I went over the hardware recommendations suggested here and in the forum's guidelines.

As I was tweaking my build, I read the reply from Chris Moore. Thanks for the suggestion to go for a preconfigured server build from a hardware company. I guess I was thinking only along the lines of DIY, so that option hadn't crossed my mind. Anyway, I'm definitely exploring it now.

After I have more information about the server builds I'll make sure to update this thread.

Cheers.
 
