BUILD Purchasing new FreeNAS setup.. Feedback?


Art

Dabbler
Joined
Dec 30, 2015
Messages
22
I'm on the verge of purchasing this list of hardware for a new FreeNAS build. It replaces a Synology 412+ unit (4x 3TB in SHR); the Synology will be repurposed as a secondary backup. The primary FreeNAS machine will be used for Plex (needs to transcode/stream 4K content to multiple devices), computer/data backups (some data syncing with cloud storage), central storage for users (private data), network shares, an FTP server, an SCM (git) repository, and some Docker and VM use for development purposes (not long-lived instances).

My budget is 5000€ (the list below comes to 5500€, so 500€ over budget), and I currently live in the Netherlands, so I need to find suppliers locally/in the EU.

Hardware List:

  • 1x Silverstone SST-DS380 Case
  • 1x Silverstone SST-SX600-G Power Supply
  • 1x SuperMicro X10SDV-TLN4F (Xeon D-1540/1541)
  • 8x WD Red Pro NAS 6TB
  • 1x LSI 9300-8i HBA (PCIe)
  • 2x LSI LSI00411 SFF-8643 to 4x SATA breakout cables
  • 4x Samsung 16GB ECC Registered DDR4 (M393A2G40DB0-CPB) or 2x Samsung 32GB ECC Registered DDR4
  • 3x SanDisk Ultra Fit USB flash drive 32GB
  • 1x USB 2.0 internal header adapter
  • 1x USB 2.0 header-to-backplate adapter
  • 2x Samsung 850 EVO 1TB
  • 1x APC BR1500G-GR UPS

I am debating about some minor changes to this setup..

  • Should I go with 2x 32GB memory sticks instead of 4x 16GB? Are there any performance impacts between the two configurations? I doubt I'll upgrade memory later (I know more RAM is better), as I don't think it'll be that beneficial for my use case. Update: going with 2x 32GB.
  • Should I do a RAIDZ2 with all 8 disks, or a RAIDZ3 with 7 disks plus 1 hot spare? I'm leaning towards RAIDZ2, as it's a home server and I will be backing up the important data to the cloud/backup NAS. Update: RAIDZ2, to maximize space.
  • Should I get the SuperMicro board without the 10GbE NICs, as I'm not sure I'll be utilizing 10GbE anytime soon, or keep them to future-proof my home network? Update: keeping the 10GbE version.
  • Should I go with a lower-wattage power supply, or is it better to know I'll always have enough for possible future upgrades? Update: staying with the current 600W PSU.
  • I plan on booting from mirrored USB sticks. I don't plan on removing them, so I'm thinking of putting them on an internal USB 2.0 header so they aren't removable externally. Can any problems arise from doing this, or should I get a USB backplate for the case instead? Update: got both a backplate and an internal USB header adapter; I'll see which I like/fits better.
  • I don't think I need an L2ARC, but is there any real need to get one at the beginning, or should I only add it later if needed? Update: will add later if needed, after I upgrade memory beyond 64GB.
  • I would like to use FreeNAS 10, but it's still a few months out from release. Should I wait, or use FreeNAS 9.3 now and upgrade in a few months when 10 is released? Update: starting with FreeNAS 9.3.
  • Any additional general advice with this hardware configuration that I should know about?
Thanks in advance!

Art
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You shouldn't notice any speed difference from memory configurations, but as long as prices are reasonable, you should consider buying the higher density sticks, just in case your future plans change. With 64GB of RAM you're almost certainly fine even with the 16's though.

Skip the NAS Pro hard drives. They run hotter at 7200RPM and you gain very little from it. The way to make ZFS faster is to increase the pool size (a bigger pool keeps write speeds up as it fills and fragments) and to add ARC/L2ARC.

The RAIDZ decision is basically one of how much tolerance for failure you have. The RAIDZ2 of 8 disks will give more space (~36TB). The RAIDZ3 option is highly resilient but gives you much less space (~24TB with the spare).
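
To put rough numbers on that, here's a quick sketch (raw capacity only; real usable space will be lower after ZFS overhead and the usual free-space headroom):

```python
# Quick raw-capacity sketch for a single RAIDZ vdev (ignores ZFS overhead,
# padding, and the advice to keep plenty of free space).
def raidz_raw_tb(total_disks, parity, disk_tb, spares=0):
    data_disks = total_disks - spares - parity
    return data_disks * disk_tb

print(raidz_raw_tb(8, parity=2, disk_tb=6))            # 36 -> 8-wide RAIDZ2
print(raidz_raw_tb(8, parity=3, disk_tb=6, spares=1))  # 24 -> 7-wide RAIDZ3 + hot spare
```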

Your chassis can't really support more hard drives, and eight drives with that board would put you at needing roughly 550W. I'm not sure why you think it's oversized.
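
For a feel of the math, a back-of-the-envelope sketch (the per-component wattages here are illustrative assumptions, not the exact figures from the sizing sticky):

```python
# Back-of-the-envelope PSU sizing; all wattages below are assumptions.
hdd_count = 8
hdd_spinup_w = 30        # assumed 7200RPM spin-up draw (~2A on 12V plus the 5V rail)
board_cpu_ram_w = 150    # assumed Xeon D board + RAM + HBA + fans under load
ssd_w = 2 * 5            # assumed draw for two 2.5" SATA SSDs

# Worst case is power-on, when every drive can spin up at once.
peak_w = hdd_count * hdd_spinup_w + board_cpu_ram_w + ssd_w
headroom = 1.3           # keep the PSU comfortably below its rating
print(peak_w, round(peak_w * headroom))  # 400 520 -> a ~550-600W unit is reasonable
```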

Add L2ARC later if needed. Preferably after adding more memory.

Use FreeNAS 9.3 for now.

Are you planning to run VM's off this hardware? If so, be aware that VM performance on RAIDZ is ... poor. Not like totally-unusable-poor, but you-don't-wanna-do-that-regularly poor.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
I'd go with an LSI 9211-8i HBA (or a comparable SAS2008-based HBA). That means getting SFF-8087 breakout cables instead of the SFF-8643 ones listed above.

VM performance on RAIDZ won't be great. Consider getting a pair of SSDs and making a separate zpool to host your jails/VMs. Definitely switch from Red Pro to Red.
 

Art

Dabbler
Joined
Dec 30, 2015
Messages
22
Thanks for the quick replies!

I will definitely switch to the standard WD Reds, start with FreeNAS 9.3, and go with 2x 32GB sticks to allow for possible future RAM upgrades.

I was thinking I could get away with a 450W power supply (calculated from sources other than FreeNAS), but went for the 600W to be safe. I will stay with the 600W for this configuration based on FreeNAS' calculator.

As for running VMs, I don't plan on doing much there, but I will occasionally need to spin up some VM instances for development/testing purposes. Performance isn't a huge deal at the moment, as I can always do some performance testing in the cloud (Amazon EC2 or similar) if really needed.
Is getting a pair of SSDs for the jails/VMs really needed if VM performance isn't a huge concern? Should I only get the pair of SSDs when I need more performance from the VMs? Or should I get a pair of SSDs for the jails only?

@anodos -- can you clarify why going with the LSI 9211-8i HBA is better than the LSI 9300-8i HBA? The 9300 is newer (and only 50€ more) so not sure why you are suggesting the 9211.

Thanks!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
@anodos -- can you clarify why going with the LSI 9211-8i HBA is better than the LSI 9300-8i HBA? The 9300 is newer (and only 50€ more) so not sure why you are suggesting the 9211.
The 9211 is very well tested under FreeNAS, and you won't see a performance improvement with the 9300. Sometimes the bleeding edge makes you bleed. :) Think of it as a way of saving 50€ on your build.

I'd probably go with RAIDZ2 and enjoy the extra space. ZFS doesn't like it when you overfill your zpool. In real life, there aren't many things that will kill 3/8 disks simultaneously, but not 4/8 disks.

A single RAIDZ vdev isn't well-suited for VM workloads. I've run a couple of jails (that weren't doing much) and a virtualized Server 2012 R2 DC in VirtualBox on a system with an 8-disk RAIDZ2 zpool. Performance wasn't great, but it was stable. You can typically fit a lot of VMs/jails on a mid-range SSD.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I was thinking I could get away with a 450W power supply (calculated from sources other than FreeNAS),

Yup, I wrote the power supply sizing sticky because people kept coming in with numbers ranging from merely slightly-nervous-making to totally-unsuitably-small, invariably generated by some random "calculator" often written by someone who might never have considered the possibility that you'd be spinning up more than two drives at once.

It's kinda sad that I have to, really, but at least I can provide a lucid justification for my numbers and methodology. :smile:

Is getting a pair of SSDs for the jails/VMs really needed if VM performance isn't a huge concern? Should I only get the pair of SSDs when I need more performance from the VMs? Or should I get a pair of SSDs for the jails only?

No, yes, maybe.

@anodos -- can you clarify why going with the LSI 9211-8i HBA is better than the LSI 9300-8i HBA? The 9300 is newer (and only 50€ more) so not sure why you are suggesting the 9211.

Already answered by the person asked, but sticking my butt in:

The LSI 6Gbps chipsets have, in aggregate, probably over a billion hours of problem-free runtime under their belt with the firmware/driver versions FreeNAS uses.

The LSI 12Gbps chipsets have, in aggregate, something closer (at least in order of magnitude) to a million.

There is absolutely nothing wrong with getting the 12Gbps stuff as long as you understand that there's a slightly higher chance that you'll run into problems. But, for most people, the 6Gbps is quite sufficient, plus less expensive, plus more mileage, so it will probably remain the recommendation for some time.
 

religiouslyconfused

Contributor
Joined
Dec 14, 2015
Messages
184
4K transcoding is rather unknown at this point, and it will definitely take a CPU with a high PassMark score. I think 4K needs about 4000-5000 PassMark per transcode, but I'm not sure on that. The Xeon D appears to be able to do a couple of 4K transcodes.
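
As a rough sanity check (the PassMark figures below are the guesses above plus an assumed score for the CPU, not benchmarks I've run):

```python
# Rule-of-thumb transcode estimate; both figures below are assumptions.
cpu_passmark = 10000        # assumed multi-thread PassMark for a Xeon D-1540
per_4k_transcode = 4500     # midpoint of the 4000-5000 guess above
per_1080p_transcode = 2000  # commonly cited figure for a 1080p transcode

print(cpu_passmark // per_4k_transcode)     # 2 simultaneous 4K transcodes
print(cpu_passmark // per_1080p_transcode)  # 5 simultaneous 1080p transcodes
```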
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
4K transcoding is rather unknown at this point, and it will definitely take a CPU with a high PassMark score. I think 4K needs about 4000-5000 PassMark per transcode, but I'm not sure on that. The Xeon D appears to be able to do a couple of 4K transcodes.
Well, as far as I can tell, the only affordable CPU option with a higher PassMark score is the Xeon E5-1650.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I am a big fan of the Xeon E5-1650 v3. Relatively inexpensive (especially compared to the 26xx series) and great performance.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
I am a big fan of the Xeon E5-1650 v3. Relatively inexpensive (especially compared to the 26xx series) and great performance.
I'm too lazy to price it out, but I'm curious whether an E5-1650-based system would be less expensive than a Xeon D-based one. It might be another thing for the OP to look into. He'd probably have to get a different case/PSU.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The X10SDV-F is around $800.

An X10SRL plus E5-1650 v3 is around $775.

The Xeon D 1540 hits a Geekbench of around 18K-20K.

The E5-1650 v3 hits about 20K.

The Xeon D is weaker in the per-core performance category, but has better performance-per-watt, so it probably has reduced opex. On the other hand, it has extremely limited expandability, though ASRock keeps teasing its mega board.

The E5 has been available for a year and is highly expandable.
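
Condensed into rough price-per-performance terms (using the approximate prices and scores above):

```python
# Rough $/performance comparison from the approximate figures above.
options = {
    "X10SDV-F (Xeon D-1540)": (800, 19000),  # ~$800 board+CPU, Geekbench ~18-20K
    "X10SRL + E5-1650 v3":    (775, 20000),  # ~$775 combined, Geekbench ~20K
}
for name, (usd, score) in options.items():
    print(f"{name}: ${usd / score * 1000:.0f} per 1000 Geekbench points")
```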
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
The X10SDV-F is around $800.

An X10SRL plus E5-1650 v3 is around $775.

The Xeon D 1540 hits a Geekbench of around 18K-20K.

The E5-1650 v3 hits about 20K.

The Xeon D is weaker in the per-core performance category, but has better performance-per-watt, so it probably has reduced opex. On the other hand, it has extremely limited expandability, though ASRock keeps teasing its mega board.

The E5 has been available for a year and is highly expandable.
The X10SRL has 10 SATA ports, which means you don't need an HBA for 8 drives. That reduces the up-front cost and goes some way toward evening out the power consumption.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, I was actually looking at them just head to head as a general purpose compute platform, my mistake. :smile:

I've actually been wrestling with the possibility of getting a Xeon D or an E5-16xx as my next desktop workstation for awhile now. The unavailability of some of the most attractive Xeon D options is ... really annoyifying.
 

Art

Dabbler
Joined
Dec 30, 2015
Messages
22
The E5-16xx series is nice (and tempting), and the v4 is coming out shortly as well, but I think I will stick with the D-1540/1541 for now, as I want to keep the build in a small form factor (Mini-ITX). If I change to an E5, I'd be more tempted to build a larger server, but I currently don't have much space available for a larger setup. Maybe for my next NAS I'll get something bigger that can be rack-mounted with 24 bays, but that's a long way off for me at the moment.

I started looking at some SSDs, and the Samsung 850 EVO 1TB is at a nice price right now, so I would get two of them and mirror them for the jails/VMs. I was debating the 512GB models, but I think I'd be better off with the 1TB models if I'm going to use them for VMs as well. How much space do all your jails typically require? I assume not much, and that it also depends on use case, but what would it be for the average user?

As for the HBA and onboard SATA: would it be better to have all 8 HDDs on the HBA, or to split them half and half between onboard SATA and the HBA? I was planning to put them all on the HBA, but I'm not sure whether the SSDs would benefit more from being on the HBA than on onboard SATA, since the SSDs/HDDs are all SATA3, or whether there's some other reason unknown to me to mix them between the HBA and onboard SATA.

Thanks for the additional information regarding the LSI chipsets and their dependability history. I will keep that in mind before I make a final decision.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The HBA will be marginally slower for SSDs. There is actually a PowerPC chip in there pushing stuff around. The SATA ports are a slightly better choice for SSDs. No decision you make there will actually hurt you, though.
 

religiouslyconfused

Contributor
Joined
Dec 14, 2015
Messages
184
Samsung makes a great SSD, and my desktop has one as the boot drive. :)
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
...if I change to an E5, I'd be more tempted to build a larger server...
You say that like it's a bad thing. :) Full ATX towers make excellent end-tables, and a half-rack is an excellent conversation piece. Set some flowers on top of it and people might even mistake you for an interior decorator.

I started looking at some SSDs, and the Samsung 850 EVO 1TB is at a nice price right now, so I would get two of them and mirror them for the jails/VMs. I was debating the 512GB models, but I think I'd be better off with the 1TB models if I'm going to use them for VMs as well. How much space do all your jails typically require? I assume not much, and that it also depends on use case, but what would it be for the average user?
For me, not much: 150GB including VirtualBox VMs. When I need lots of storage space for a jail, I just nullfs-mount a dataset from my main zpool... BUT I've never heard anyone complain that they have too much SSD storage. In that way it's kinda like beef jerky.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Do note that many of the same rules apply to SSD as HDD; you want to have a large pile of free space if possible, and things will be faster.
 

Art

Dabbler
Joined
Dec 30, 2015
Messages
22
Okay -- I decided to stay with the 9300-8i for a possible future migration to a newer system when needed. Even though the 9300 doesn't have as much mileage in testing/use as the 9211, I'll add to the growing statistics on the 9300's chipset.

I also forgot to add a new UPS to my list. So by adding the SSDs and the UPS, I went over my budget by 500€, which in the end isn't too bad. I should get the parts by the end of this week (and the case by the beginning of next week). I will update once the system is up and running and has gone through a burn-in test.

Thanks for all your help (especially @jgreco @anodos)
 